Monday, March 18, 2024

Copilot Productivity Tip – Teams Chat Catchup

Time spent per day before: 30-60 minutes
Time spent with Copilot: 5-15 minutes

Resources


My workday at Microsoft involves communication with multiple teams spread around the globe, which means in-person communication is not always possible. This is why Teams has become the go-to tool for many conversations and discussions. When I start my day at 8am I know chats have happened in other time zones, and instead of reading everything right away I can use Copilot catchup for a summary, and then decide if I need to read it all.



I typically use the suggested “Summarize what I've missed” Copilot prompt inline in the chat, which opens the Copilot pane, or I open the pane manually as seen below, with prompts such as “Highlights from the past day” or “Highlights from the past 7 days”.

image image

Friday, March 15, 2024

Copilot Productivity Tip – Teams Meeting Insights


Time spent per day before: 0-120 minutes
Time spent with Copilot: 0-15 minutes

Resources

The beauty of online meetings with transcripts is the ability to quickly go back and find key points later without having to watch the recording or read through the full transcript.

By default Teams provides an AI notes section with a quick summary, and using the Copilot pane you can ask more direct questions, such as summarizing your talking points or listing your action items from the meeting.

My best example is a late night meeting where I forgot to take notes, and the next morning I knew I was supposed to contact an “Andrew” but had forgotten the full name. Asking for the name in the meeting's Copilot quickly gave me the answer, saving me around 15 minutes and some grief.




Friday, March 8, 2024

Allowing arbitrary custom scripting in SharePoint Online, or not? – that is the question! (aka Stealing your data since 2001!)

…and the answer is, as it always has been, NO!



Disclaimer: the opinions in this post are entirely my own and have nothing to do with my work at Microsoft. I have not changed my opinion on this matter in many, many years.

Sparked by the recent message center post MC714186 – Remove Custom Script setting in OneDrive and SharePoint web, I figured I’d write out my stance and my full support of the planned change.

Summary: The Custom Script setting in OneDrive and SharePoint web will be removed in March 2024. A new PowerShell command, "DelayDenyAddAndCustomizePagesEnforcement", has been introduced to delay the change to the custom script setting on the tenant until mid-November 2024. The NoScriptSite setting will be configured to True for all existing SharePoint sites and OneDrive sites except for specific site templates. Existing scripts in OneDrive and SharePoint sites will remain unaffected. Administrators can permit the execution of custom scripts on specific SharePoint sites using the Set-SPOSite command.

In the above summary I want to point out that “existing scripts will remain unaffected” means classic page injections, not SPFx solutions with the setting requiresCustomScript=true.
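If you do end up needing custom script on a specific site, the change is managed from the SharePoint Online Management Shell. Below is a minimal, hedged sketch assuming you are already connected with Connect-SPOService, that the site URL is a placeholder, and that the new delay setting surfaces as a parameter on Set-SPOTenant as MC714186 suggests.

# Re-allow custom script on a single site (per the Set-SPOSite note in the summary)
Set-SPOSite -Identity https://contoso.sharepoint.com/sites/legacy-scripting -DenyAddAndCustomizePages $false

# Delay the tenant-wide enforcement until mid-November 2024
# (assumption: the setting is exposed as a parameter on Set-SPOTenant)
Set-SPOTenant -DelayDenyAddAndCustomizePagesEnforcement $true

That said, per the title of this post, my recommendation is to leave custom script off wherever you can.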

Thursday, January 18, 2024

Easier editing of Microsoft Search verticals in SharePoint sites (finally fixed!)

This is perhaps a tiny one, but for the longest time search verticals in a SharePoint site or SharePoint hub site have been a bit of a mystery when you wanted to edit one. To edit a vertical, you had to double-click it, as there was no Edit button in the ribbon.

The issue has been known since the feature rolled out, but not addressed. I'll be blunt and say I decided to take matters into my own hands and just fix it. So here you go, a gift from me: a more intuitive way of editing search verticals in SharePoint – the Microsoft 365 admin center has always had the Edit button.

At last the experience matches the documentation at https://learn.microsoft.com/en-us/microsoftsearch/manage-verticals#manage-site-level-verticals.



Thursday, January 11, 2024

Demystifying Author properties on files in SharePoint and search


For as long as I have been doing SharePoint, figuring out which properties to use when filtering or displaying people-related search results has been a challenge, as the documentation in this space is somewhat lacking. I’m not sure why I haven’t done this write-up earlier, but no time like the present.

Friday, December 8, 2023

AI model bias and why responsible technology matters – exemplified by image generation

In this era of AI, where ChatGPT and LLMs have become the hottest topic in computer science since the Apple Macintosh and the IBM PC, I figured I’d do a small write-up on AI model bias and why paying attention to bias is important. This is especially true for enterprise scenarios, where Microsoft is launching a long range of AI-powered Copilot experiences.

The author of this article works for Microsoft (December 2023) and is an internal champion for responsible AI and technology, as well as for privacy and compliance.

At Microsoft we have a high bar for delivering responsible AI solutions, which means a lot of work is put in place to ensure the output from AI systems follows Microsoft’s AI principles: being fair, inclusive, reliable and safe, accounting for privacy and security, and being accountable.

Any model, be it a large language model (LLM) or an image generation model, will inherently have bias built in due to the training data used. In smaller models you can manually verify the training data to counter some bias and balance the training set, but as models grow larger this becomes inherently harder. I’m not saying there are no systems in place to counter training bias already, but to truly counter bias it has to be built into pre- and post-processing of input prompts and outputs from the models.

I will use image generation as an example, showing the difference between the image creator in Microsoft Designer (https://designer.microsoft.com/), built on DALL·E 3 from OpenAI, and Stable Diffusion XL (SDXL), an open source model from Stability.AI (https://stability.ai/). The Microsoft solution has guardrails in place, while the open source solution does not – unless you add them yourself via prompting. I’m not saying either of them is perfect, as the examples will show.

I want to call out that any bias shown is not statistically verified, and is only based on generating a set of random sample images with the same prompt.

Example 1 - photo of correctional officer in a well lit hallway eating a donut

image

The above eight images are from DALL·E 3. They are all close-up photos showing a fit, light skinned male with dark hair.

image

In comparison, the SDXL images have a wider focal point showing the full body. It’s a mix of male and female people, and also a mix of light and dark skinned people. I would argue the SDXL model is more accurate to what people look like in 2023, while the DALL·E 3 model outputs “perfect” looking people. Whether this is due to the images the models are trained on, or the prompt being augmented to produce “perfect” looking people, I do not know.

The default color palette is also different where DALL·E 3 has more green and SDXL has more brownish colors.

If I add “overweight” to the DALL·E 3 prompt, the Responsible AI filter kicks in and blocks the generation. If I add “fat”, then it works.

image

With SDXL I can modify the prompt to “closeup photo of a slim white male correctional officer in a well lit hallway eating a donut” to mimic what DALL·E 3 outputs by default – countering the wide-angle and real-life-looking-people bias of the model.

image

Example 2 – woman

Let’s try a simple prompt with the subject “woman”. For SDXL I added negative prompting to avoid any NSFW images – something which is blocked as part of DALL·E 3’s RAI principles.

image

DALL·E 3 seems to pivot towards portrait photos when no extra contextual information is given, as that is likely the intent with a simple input subject. The subjects are also all dark haired and appear to be young women.

image

In comparison SDXL gives a wide variety of image types, pivoting to more art-like images instead of photos.

Example 3 – painting of a beautiful norwegian fjord with vikings, with a boing 737 in the sky, in the style of munch’s scream

image

The DALL·E 3 painting nails the airplane and pretty much the painting style of Edvard Munch.

image

The SDXL one is not bad either, but the Munch style is not as visible for this one sample. And the scale of the plane vs. the viking ship and buildings is way off.

Learnings

These simple examples show that articulating your intent when prompting is crucial. Either the system has to add guardrails and contextual information to the prompt, or the person prompting has to be articulate about what they want returned and what they do not want returned. And you have to generate many images to find that ONE you really like.

For online services like Microsoft Designer, going the safe route is the only approach, as the people using the service come from a wide variety of backgrounds and age groups. Taking that extra measure to ensure everyone feels safe is important for trust in the service.

Open source solutions you can run on your own PC/phone/tablet can allow for fewer guardrails, as the individual running them likely has more skill and is using the tool themselves. Maybe the analogy of hiring a carpenter as a service vs. doing the hammering yourself can be used: you trust a hired professional to meet a certain bar, while you are responsible for anything you do yourself.

When it comes to LLMs, we know they are largely based on English text today and will favor input and output in that language. As they are built on public data, that will influence the default writing style as well. Fortunately, ChatGPT and Microsoft Copilots put a lot of effort into the system prompts wrapped around the user prompt to counter bias in the model, and to ensure grounding in facts and avoid hallucinations. More on that in another post.

References

I used the service at https://designer.microsoft.com/image-creator to create the DALL·E 3 images, and I used the Draw Things app on a MacBook with an 8-bit quantized version of the default SDXL model. The Draw Things app also works on iOS devices.

Thursday, August 10, 2023

How to paginate large results sets for SharePoint items using the Microsoft Graph Search API

If, for some reason, you want to paginate over a large set of results using the Microsoft Graph Search API, you can employ the logic described for the SharePoint API at https://learn.microsoft.com/en-us/sharepoint/dev/general-development/pagination-for-large-result-sets. Note that this approach applies to OneDrive and SharePoint items, and not necessarily to other content sources available via the Graph Search API (not tested).

Use a basic JSON template like the one below for your search requests, or modify it to add other parameters needed for your request.

{
  "requests": [
    {
      "entityTypes": [
        "driveItem"
      ],
      "from": 0,
      "size": 500,
      "query": {
        "queryString": "contoso indexdocid>**LASTID**"
      },
      "fields": [
        "indexdocid"
      ],
      "sortProperties": [
        {
          "name": "[DocId]",
          "isDescending": "false"
        }
      ]
    }
  ]
}

Here **LASTID** is 0 on the initial request. Once you get results back, pick the value of indexdocid from the last result and use that as **LASTID** on the next request. For example, if the last result has an indexdocid of 2377359, you would use 2377359 for the second request. Continue this logic until you stop getting results, and you will have iterated over all files (driveItems) containing the term contoso for the above sample.
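To make the loop concrete, here is a minimal sketch in PowerShell of the paging logic described above – assuming you already have a Graph access token in $token with permission to call the search endpoint, and noting that the exact path to indexdocid in each hit may need adjusting to the actual response payload.

$token = "<access token for https://graph.microsoft.com>"
$lastId = 0
$items = @()

do {
    $body = @{
        requests = @(
            @{
                entityTypes    = @("driveItem")
                from           = 0
                size           = 500
                query          = @{ queryString = "contoso indexdocid>$lastId" }
                fields         = @("indexdocid")
                sortProperties = @(@{ name = "[DocId]"; isDescending = "false" })
            }
        )
    } | ConvertTo-Json -Depth 10

    $response = Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/search/query" `
        -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json" -Body $body

    $hits = $response.value[0].hitsContainers[0].hits
    if ($hits) {
        $items += $hits
        # the indexdocid of the last hit becomes **LASTID** for the next request
        # (adjust this property path if your response shape differs)
        $lastId = $hits[-1].resource.listItem.fields.indexdocid
    }
} while ($hits)

Write-Host "Retrieved $($items.Count) driveItems matching contoso"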




Monday, July 3, 2023

MacBook Pro M1 with 4K monitors on a ThinkPad USB-C dock

I have a couple of 4K monitors at work connected via a ThinkPad dock. If I run the native resolution of 3840x2160 pixels, the fonts and icons are just too small for my aging eyes. The alternative native resolution is 1920x1080, and then things get too large.

The ideal for me is scaling to 2560x1440. Sure you can do this via the display system settings, but then everything is blurry. But there is a fix.

  1. Install DisplayLink Manager from https://www.synaptics.com/products/displaylink-graphics/downloads/macos 
  2. Check the experimental mode for 3008x and 2560x modes support
  3. Then pick the scaled resolution in Display settings, getting you a natively scaled 2560x1440 which is not blurry in HiDPI mode. See https://support.displaylink.com/knowledgebase/articles/1993915.

Tuesday, April 25, 2023

New useful managed properties to use in Microsoft Search


For those working with hub sites in SharePoint, you have for a long time used the managed property DepartmentId, later accompanied by RelatedHubSites when hub site hierarchies were enabled.

Now the time has come to have these properties, and some more, added to the public documentation.

Take a peek at https://learn.microsoft.com/en-us/sharepoint/crawled-and-managed-properties-overview which covers these new properties available for online experiences.

The documentation UX is not ideal, so make sure you scroll the table of properties to the right to read the comment for each property. Here’s a copy of the table for reference, where I moved the comment column for visibility.

Note the (*) highlighting that it’s not guaranteed that each item has a value in the property.

| Property name | Type | Comment | Multi-valued | Queryable | Searchable | Retrievable | Refinable | Sortable | Mapped crawled properties |
|---|---|---|---|---|---|---|---|---|---|
| DepartmentId | Text | Site ID of the hub of the immediate hub. Applies to all items in the hub/associated sites. | No | Yes | No | Yes | Yes | No | ows_DepartmentId |
| RelatedHubSites | Text | Site IDs of associated hubs including hub hierarchies. Can be used instead of DepartmentId for most scenarios. Applies to all items in the hub/associated sites. | Yes | Yes | No | Yes | No | No | ows_RelatedHubSites |
| IsHubSite | Yes/No | Applies to the site result of a hub (contentclass=STS_Site) | No | Yes | No | Yes | No | No | ows_IsHubSite |
| ModifierAADIDs | Text | Semi-colon separated list of AADIDs for modifiers of a file or page ordered in date descending order. (*) | Yes | Yes | No | Yes | Yes | Yes |  |
| ModifierDates | Date and Time | Semi-colon separated list of modification dates for modifiers of a file or page ordered in date descending order. (*) | Yes | No | No | Yes | No | No |  |
| ModifierNames | Text | Semi-colon separated list of the names for modifiers of a file or page ordered in date descending order. (*) | Yes | Yes | No | Yes | No | No |  |
| ModifierUPNs | Text | Semi-colon separated list of UPNs for modifiers of a file or page ordered in date descending order. (*) | Yes | No | No | Yes | No | No |  |
| ChapterTitle | Text | Semi-colon separated list of auto-generated chapters on Teams meeting videos. (*) | Yes | Yes | Yes | Yes | No | No | ChapterTitle |
| ChapterOffset | Text | Semi-colon separated list of time codes matching the chapter titles for auto-generated chapters on Teams meeting videos. (*) | Yes | No | No | Yes | No | No | ChapterOffset |

* Property is not guaranteed to contain data.
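Since RelatedHubSites is queryable, a quick (hedged) example of putting these properties to use is scoping a query to a hub and all of its associated sites, where <hub site GUID> is a placeholder for the site ID of your hub:

contoso RelatedHubSites:<hub site GUID>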

Retirement of Dynamic Ordering feature in classic search experiences

In message center post MC44789 from April 22nd, 2023, Microsoft announced the retirement of the dynamic ordering feature in classic search experiences.

If you don’t know what the feature is, the below image highlights it, as seen in the query builder for classic search result sources, query rules, and search web parts.

image

The above screenshot shows a rule where, if the term xrank matches in the title, results will be boosted to the top of the result list.

Wait what?? So I will no longer be able to boost items per my own logic? Sure you can, and this is called out in the MC post – “Functional parity may be achieved by adding XRANK clauses directly to the query template in the Query Builder dialog.”

Previously, when testing the query from the test tab, you could see the output of the final query. However, this is no longer the case, so I’ll show you how to transition dynamic ordering rules over to manual XRANK.

image

Today, using the constant boost (cb) parameter for ranking is not the recommended approach. The reason is that the internal rank scale has changed over the years, so a value of 5,000 may or may not be enough to move something to the top. The below example has a rank of –17921, so adding 5,000 would not help.

image

The recommended approach today is the standard deviation boost, using the stdb parameter.

See https://learn.microsoft.com/en-us/graph/search-concept-xrank or https://learn.microsoft.com/en-us/sharepoint/dev/general-development/keyword-query-language-kql-syntax-reference#dynamic-ranking-operator for all parameters.

Manually writing dynamic ordering rules as XRANK

A query template to boost a result to the top can then look like:

{?{searchTerms} XRANK(stdb=100) Title:xrank}

Feel free to replace 100 with a smaller or larger number as needed.

If you want to boost items with title=foo pretty high, and items with title=bar less, you can use a nested XRANK statement, similar to what multiple dynamic ordering rules would accomplish.

{?({searchTerms} XRANK(stdb=5) Title:foo) XRANK(stdb=2) Title:bar}

If you want to demote results instead of promoting them, use a negative number.
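For example, to demote results with xrank in the title instead of boosting them:

{?{searchTerms} XRANK(stdb=-100) Title:xrank}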

If you go for decimals instead of integers, I recommend reading https://www.techmikael.com/2014/11/you-should-use-exponential-notation.html to ensure they always work.