SharePoint

Best practices for managing SharePoint permissions

Remember when you stored all your confidential files on a flash drive in your pocket? Nobody had access to your pants, so you didn’t even think about file permissions. I’m guessing that’s not the world you woke up to this morning. All your docs now reside in the cloud, well protected by dozens of security systems. Maybe overprotected? Have you ever tried to share one of those confidential documents with a colleague? If you manage information in a medium or large organization, you know the pain of confidentiality issues.

SharePoint is a great tool for structuring information effectively

SharePoint has excellent features for sharing or restricting access to different structural elements. When it comes to permissions, a common means of controlling collaboration, users can be granted different levels of control over Sites, Lists/Libraries, Folders or List Items/Documents, collectively known as objects.

Such permissions can be granted directly to individual user accounts, to a group of users, or to Active Directory groups. You can grant access to the whole site, restrict access to a specific library, and configure unique permissions for certain items to share them with everyone. Or split information into different folders of one library and share their content with different groups of users in your organization. Or maybe you have invented an even more complex scheme to fit the crazy needs of your boss?

It sounds great and works impressively in demos, but…

… does it really work in practice?

I have been working with different SharePoint solutions for years. Every customer has specific requirements that are not always compatible with SharePoint’s capabilities. Very often customers are so fascinated by security in SharePoint that they don’t realize how insidious this game can be. Many times we tried to fulfil these specific needs, and it turned into pain that forced us to revisit the initial requirements.

Do you want to step on the same rake?

I completely understand how you feel. Be aware, and follow these best practices to avoid it.

There are five levels of permissions:
1. Site collection
2. Site
3. List / Document library
4. Folder
5. List Item / Document

Permissions are inherited from the top level down, so if you share access to a site with a group, all users in that group will be able to manage documents in any library of that site. However, on any level you can break the inheritance and configure unique permissions that apply on that level and on all nested objects below it. And this is where the nightmare begins.
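For reference, inheritance can be broken either through the UI or with a script. Below is a minimal sketch using the PnP PowerShell module; the site URL and library name are made-up examples, not anything from a real environment:

# Sketch, assuming the PnP.PowerShell module is installed
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/projects" -Interactive

# Stop inheriting permissions from the site; keep the current permissions as a starting point
Set-PnPList -Identity "Contracts" -BreakRoleInheritance -CopyRoleAssignments

# From this point on, the library has its own unique permissions to maintain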

Imagine you have a library with hundreds of folders. Everything worked great until somebody broke permission inheritance on a certain folder and made it available only to a specific group of users. Some user complains that (s)he cannot find a document located in that folder. How easy will it be to find that folder and grant permissions to that user? What if you decide to revoke access to some folder? Permission management is a weak spot of the SharePoint UI, so you will most likely be frustrated very soon. I’d be frustrated too.

Permission management is a weak spot in SharePoint

Don’t worry, I often make that mistake myself. But are there good practices for managing permissions while still keeping control of them? Yes, there are! And it is simple: you should plan in advance. This is exactly the case where failing to plan is planning to fail. If you want to keep your system maintainable, follow these simple rules:

1. Create a permission plan if you manage multiple objects with different permissions in a site collection.
It looks simple until you decide to restrict access to some site or document library. You break permission inheritance, configure access rights, and everything goes well… as long as your admin is at work. It may become a big surprise for a new admin. Building a solution without considering the diverse and complex work patterns of employees can be a recipe for disaster.

A detailed permission plan gives the admin knowledge of the site structure and its permissions. A standardized approach where permissions are grouped at a higher level can be a good way to go. Understanding user groups with regard to their areas of focus and activities can lead to defining different approaches to user permissions.

2. Grant permissions on higher levels. The deeper the object level, the harder you should try to avoid breaking permission inheritance.
You can grant specific permissions on a folder located on the third level of subfolders in a document library. But try to find the problem a year later, when somebody complains that the content of this folder is not visible. Yes, a permission plan may help if you always keep it up to date. However, in a big structure there is always room for a mistake. Just avoid complexity and sleep calmly.

3. Use SharePoint security groups and explicit group membership for managing site members.
If you need to use Active Directory groups, include them in a SharePoint security group.

4. Avoid item-level permissions.
This may work well with automation, but it is not maintainable manually. If anything goes wrong, you have almost no chance to identify the problem, because the documents may simply not be visible to you. A small audit script, like the sketch below, can help you spot such objects.
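To make rules 2 and 4 easier to live with, you can periodically audit a library for objects with broken inheritance. This is only a rough sketch, assuming SharePoint Online and the PnP.PowerShell module; the site URL and library name are placeholders:

Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/projects" -Interactive

# Walk through every folder and document in the library
$items = Get-PnPListItem -List "Documents" -PageSize 500
foreach ($item in $items) {
    # HasUniqueRoleAssignments is true when inheritance has been broken on this object
    $hasUnique = Get-PnPProperty -ClientObject $item -Property HasUniqueRoleAssignments
    if ($hasUnique) {
        Write-Output ("Unique permissions: " + $item.FieldValues["FileRef"])
    }
}

Running something like this on a schedule gives you the list of “surprise” folders before a frustrated user finds them for you.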

Simplicity is the key to success

SharePoint is a powerful tool for building a complex information system. But always keep things simple!

Just remember one golden rule: “order and simplification are the first steps toward the mastery of a subject” (Thomas Mann).

Find more information about permissions:

Overview: best practices for managing how people use your team site

Understanding permission levels in SharePoint

Customize permissions for a SharePoint list or library

Still having doubts? Ask the experts at Cloudriven. We are always happy to help you!
Thank you for reading, and if you have any questions, please ask below.


Dynamics CRM

Dynamics CRM front-end server deployment to replace a corrupted server

This blog post is about remotely configuring settings in a Windows Server environment as part of a Dynamics CRM front-end server installation.

Scenario

I recently ran into a situation where an on-premises Dynamics CRM front-end server was corrupted and none of the Windows management tools were accessible on that machine. For example, Event Viewer, the Windows services console, MMC, the IIS management console, CRM Deployment Manager etc. did not start at all. However, the Dynamics CRM services were still running properly on this server. The deployment model in this environment was one where all the front-end server roles were installed on this corrupted server and the CRM DBs were on a separate server. The SQL Server itself and the CRM DBs were fine, without any issues. The CRM environment was configured for claims-based authentication in IFD mode.

So, the task here was to install all the CRM front-end services to a fresh Windows server machine.

SSL certificate

I started the Dynamics CRM installation wizard by pointing it to an existing CRM deployment. When that option is used, the installation wizard reads the existing CRM deployment configuration data from the CRM configuration DB and assumes certain settings and configuration options to be the same in the new front-end server installation as in the old one. One of these options is the SSL certificate. The Environment Diagnostics Wizard (EDW) threw an error stating that the existing claims-based authentication is configured to use a certain SSL certificate and that the same certificate must be deployed to the new front-end server. Before running the EDW, I had deployed another, more recent SSL certificate on the new server. We were able to retrieve the older SSL certificate from another server where it was also in use, so luckily that issue got resolved.

Claims-based authentication and IFD

The next challenge was related to claims-based authentication and IFD. As mentioned earlier, this is a rather simple Dynamics CRM server deployment in terms of server topology. The ADFS service was also deployed to the same corrupted front-end server, which meant that none of the ADFS management tools were accessible either. After the initial SSL certificate issue was resolved, the next error the EDW threw was related to claims-based authentication: “The encryption certificate cannot be accessed by the CRM service account”.


My first instinct was that, hey, most likely the CRM service account does not have read privileges to the private key of the SSL certificate. But it turned out that this was not the issue here. Rather, this error is caused by a quirk of the CRM server installation: when installing a new front-end server into an existing CRM deployment, IFD and claims-based authentication need to be disabled first. Then the CRM server installation can be done, and afterwards claims-based authentication and IFD can be activated again.

Disabling IFD and claims-based authentication would take two mouse clicks if the CRM Deployment Manager tool were available. But as I mentioned in the beginning, this was not the case here. None of these tools were available on the corrupted server.

PowerShell to the rescue

After a bit of head scratching, I realized that I could use PowerShell to disable IFD and claims-based authentication. But PowerShell did not start on the corrupted server either. However, good old PowerShell can also be used remotely; it just requires enabling PowerShell remoting. This can be done with various tools: on the server locally (for obvious reasons not an option in this case), by using Group Policy, or directly by using PowerShell Direct if your server platform is Windows Server 2016 or Windows Server 2019. In my case, however, the server platform was Windows Server 2012. For that, there is a tool called PsExec, which is Microsoft’s free remote-control tool: https://docs.microsoft.com/en-us/sysinternals/downloads/psexec

So, I downloaded PsExec and within seconds I had PowerShell remoting enabled on the corrupted server by executing the following piece of script:

psexec.exe \\RemoteComputerName -s powershell Enable-PSRemoting -Force

You need to have a certain firewall port open for this to work (by default, WinRM listens on TCP 5985 for HTTP and 5986 for HTTPS). I will not get into opening firewall ports remotely here, but depending on the firewall provider, that can naturally be done.

Disable claims-based authentication remotely

So how do you start a remote PowerShell session? Quite easily; just execute the following script and you have a remote session started:

$s = New-PSSession -ComputerName <the remote server name>

Enter-PSSession -Session $s

And now you have a remote session where you can, for example, browse directories of the remote server and execute scripts on it.

The rest is just like sliding in a water park on a hot summer day: easy and fun. You need to be in the Deployment Administrator role in Dynamics CRM, and next you register the Dynamics PowerShell snap-in:

Add-PSSnapin Microsoft.Crm.PowerShell

There is one more thing you need to do if it turns out that on the old CRM server the Windows registry value for the Dynamics Deployment Web Service URI, “DeploymentWSUri”, is not set (as was the case here). As the regedit tool did not work on the corrupted server, I needed to connect to the server’s Windows registry remotely. Luckily regedit gives you this possibility. To configure this missing piece, connect to your old server’s Windows registry, open the registry hive \HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM and add a new string value named DeploymentWSUri with the following data:

http://yourserver/xrmdeployment/2011/deployment.svc
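If you prefer to stay in the remote PowerShell session instead of using remote regedit, the same value can be added with a couple of cmdlets. This is just a sketch that reuses the $s session created above and the same placeholder URL:

# Add the missing DeploymentWSUri string value remotely
Invoke-Command -Session $s -ScriptBlock {
    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSCRM" `
        -Name "DeploymentWSUri" `
        -PropertyType String `
        -Value "http://yourserver/xrmdeployment/2011/deployment.svc"
}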

Now you are ready to rock with PowerShell and the Dynamics CRM cmdlets. To check the current claims-based authentication settings, the following command can be used:

Get-CrmSetting -SettingType "ClaimsSettings"

That will show you a list of settings related to claims-based authentication.

Next, execute the following piece of script:

$claims = Get-CrmSetting -SettingType "ClaimsSettings"

$claims.Enabled = 0

Set-CrmSetting $claims

Now the claims-based authentication should be disabled.


Finally, the installation of CRM

Now you are good to go, and the EDW should pass all the tests without any errors. But you do need to restart the Dynamics CRM installation wizard from the beginning if you had it up and running while doing all of the above; just clicking back and forward to reach the EDW step of the wizard again does not do the trick.

Once the installation is completed, and after patching the new CRM front-end server to the latest update level, your CRM adventures can continue with the brand-new server up and running.
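Remember that the claims-based authentication and IFD you disabled earlier still need to be switched back on. The same Get-CrmSetting/Set-CrmSetting pattern can be used in reverse; the following is only a sketch, so verify the setting values against your own deployment:

# Re-enable claims-based authentication
$claims = Get-CrmSetting -SettingType "ClaimsSettings"
$claims.Enabled = 1
Set-CrmSetting $claims

# Re-enable IFD
$ifd = Get-CrmSetting -SettingType "IfdSettings"
$ifd.Enabled = 1
Set-CrmSetting $ifd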

I hope this blog post helps someone in a similar situation, struggling with broken Windows tools and trying to complete things remotely.

How to run an SSIS package with an Excel data source or destination in a 64-bit environment?

So, I had the following scenario for one of our customers:

  • Need to execute an SSIS package with Excel and Dynamics 365 data sources and push the data over to Azure SQL DB
  • In the dev environment, the BIDS is 32-bit

I actually had a few different kinds of challenges in deploying the package from the development environment to the production server. It took me a while to find a solution to them, so I thought it might be helpful for others struggling with the same issues to write a small blog post about it.

How does the sensitive data source and destination information get deployed with the SSIS package?

This is configured in BIDS in the dev environment. It is basically a project option, EncryptSensitiveWithPassword, that needs to be set.

You also need to make sure that the SSIS package level option is set to the same value. What this does is include the sensitive information (for example the data source and destination connection string passwords) in the SSIS package, protected with a password. Then on the execution server, where you run this package for example with a SQL Agent job, you need to provide this password to be able to see or modify the connection options.

What does a project level connection manager mean in SSIS?

The next challenge was that I had a project level Dynamics 365 connection manager specified in the SSIS project. This means that data connections using this type of connection manager do not get included in the SQL Agent job when you specify the SSIS package to be executed. What you need to do is make the connection manager package level instead of project level. This is done in BIDS by right-clicking the connection manager and selecting the “Convert to Package Connection” option. By doing this, all the connections using this connection manager are also used on the execution server side.

How to manage Excel data connections in a 64-bit server environment?

When I deployed the SSIS package to our production server and created a SQL Agent job to execute the package on a schedule, it started to throw errors about these Excel data sources. In detail, the error was “The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered”.

The resolution is to install the Microsoft Access Database Engine on the server and then set the SQL Agent job to run in 32-bit mode. You can find the Access DB engine download package here: https://www.microsoft.com/en-us/download/details.aspx?id=13255

And at least in our case, we needed to install the 32-bit version of the Access DB engine to make this work. I believe this is because BIDS is 32-bit, so it builds the SSIS package as 32-bit as well. Another step to success was to set the “Use 32-bit runtime” option on the job step.

With these options set, the package executed successfully and data flows from the Excel files to Dynamics 365.
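If you want to sanity-check the same setup outside of SQL Agent, you can run the package manually with the 32-bit DTExec and pass the decryption password. A quick sketch; the install path, package path and password below are placeholders, and the version folder depends on your SQL Server release:

# Run the package with the 32-bit DTExec so the 32-bit Jet/ACE provider is used,
# and supply the password protecting the sensitive connection information
& "C:\Program Files (x86)\Microsoft SQL Server\130\DTS\Binn\DTExec.exe" `
    /File "C:\SSIS\ExcelToDynamics.dtsx" `
    /Decrypt "YourPackagePassword"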

By the way, absolutely the easiest way to implement these types of scenarios against Dynamics 365 is to use the KingswaySoft Dynamics 365 SSIS Integration Toolkit. I have used it in several projects and it is by far the best Dynamics migration/integration tool I have used so far if you want to develop a no-code migration against Dynamics 365. So, I strongly recommend that.

Platform Economy is not only a Game for the Big Players

Once again, the room was packed as #digitalist gathered in downtown Helsinki to learn about the revolution of digitalization. For the first time, the session’s primary language was English, although much of the attention still concentrated on the home grounds of the movement. As Finns are known to be engineer-minded, gadget-loving people, it is only natural that the IoT seminar received more attention than earlier sessions this year.

 

However, perhaps shockingly to some, the message was quite the opposite. Kemira Vice President Charlotte Nyström closed the day and summed it up well: it is not about the technology, it is not about the IoT – it is about the culture. A statement strongly backed by her presentation explaining how IoT has enabled Kemira’s shift from chemical provider to operating plant processes as a service, outsourcing a large part of the customer’s value chain.

 

“It is not about the technology, it is not about the IoT – it is about the culture”

 

But let’s go back to the beginning of such stories. The morning’s opening presentation was given by Telia’s Brendan Ives. His claim was that although much of the IoT-enabling technology is built by the global giants, implementations are still local. He made good points about using the tech-savvy Nordic countries as a sandbox for larger implementations. As markets mature, the solutions need to get smarter. All layers of the stack must have open service interfaces, reminded Matti Seppä from Landis+Gyr, a company that was one of the IoT pioneers, providing smart boxes for electricity companies already 20 years ago. The simple fact is that today no single player can fulfill the needs of all individuals. Therefore, even the pioneers must open their platforms to other players in the ecosystem.

 

One great example of such thinking is the story of Fredman Group – thanks to whom even kitchens have a story to tell. This is made possible with IoT, but getting there is not about technology; it is about a new way of thinking and of measuring value.

 

The simple fact is that today no player can fulfill the needs of all individuals

 

Once a company famous for its quality plastic and paper kitchen accessories, Fredman impressed many believers in modern management culture back in 2015. Company CEO Peter Fredman stated at the same #digitalist arena that their organizational hierarchy had been turned upside down: on top of the heap sat the customer, and the CEO acted as a janitor supporting the organization to provide the best possible value for the user. Although they were not quite sure of all the steps, instead of concentrating on just a tiny piece of the food creation process, they set the goal of fighting for the best flavors.

 

Many in the audience were confused by such a story in a very technology-oriented seminar. However, the point was that the company set out to design the value chain of how food is created. Similarly to customer experience, they wanted to optimize the ingredient experience in order to create a perfect kitchen. Naturally, that is not a task for a local plastic-wrapping player in Finland alone, but it is certainly doable by combining technology-enabled insights, know-how and professional networks.

 

After proving their point by bringing more intelligence into kitchen management through IoT, it is only natural to step up a gear. Having a perfect meal on a plate is a much broader problem than just cooking the dish. There is a whole ecosystem of equipment providers, logistics managers, storage regulations, quality requirements, etc. involved. For all of that to function with minimum friction, there is a need for an ecosystem platform: a place where new insights, know-how and professional networks can be brought together in an economically feasible way. A marketplace for critical vendors along the path of the ingredient experience. A service for individual people to learn new skills. And a single point of information for monitoring operational excellence.

 

Such a platform must be built on globally dominant technologies. However, the innovation, the culture shift and the specific value promise are created and sandboxed locally… until the platform is given a chance to expand into global markets, creating a new layer of value for all the players in a very traditional industry. And perhaps even disrupting the ecosystem for good.

The Cornerstones of Competitiveness

The productivity and the number of working hours decide the game. The Competitiveness Pact negotiated in Finland encouraged companies to increase the number of working hours. I believe, however, that quite a large share of so-called high value-added companies decided to use the pact above all in ways that improve the productivity of each working hour. That was also Cloudriven’s choice.

We believe the secret of more productive working hours comes down to the following elements:

  1. People’s work is directed as directly and as fully as possible at the customers who pay for the company’s products or services.
  2. People know how to use information and technology, both alone and together, to produce greater customer value.
  3. People feel well.

The share of customer-facing work naturally needs to be quite high regardless of role. If product development does not understand how the customer uses the application, the application will not turn out to be much good. Nor have I heard of billable project work, customer service work, or marketing and sales work that would become very productive when detached from the customer. Not all customer work is directly billable, of course, but done well, all customer work can generate billing in the foreseeable future. Cloudriven’s way of looking after work that produces value for customers comes down to trust, weekly leadership routines, and bringing the key information flows into every Cloudriven employee’s week. Thanks to Eero Markelin, who wrote about his experience of a one-day job swap at Cloudriven.

You don’t need to be much of a university researcher to understand that the ability to use information and technology improves an individual’s ability to produce value for the customer. Helsingin Sanomat published an excellent article on productivity in the construction industry. The industry’s technology has developed and keeps developing, but according to the article no great leaps in productivity have been made in the last 40 years. Nor did services fare particularly well in the article. To me, the most essential message of the article is that to improve productivity you have to address 1) collaboration and information flows, 2) the ability to make use of technology, and 3) leadership. All three areas play a central role in our own operations, as Eero’s blog post already showed. When it comes to making use of technology, we use several mutually supporting methods whose goal is to improve our ability to work more productively across the organization. One of them is our TrainEngage service, which has been well received on the market and which guides people to use browser-based applications sensibly.

Last but not least comes people’s well-being. Our view of people is holistic, and many of our practices aim to make Cloudriven a good place to work. In addition, we use the Competitiveness Pact mentioned at the beginning, according to each individual’s own choice, either for exercise or for learning something new.

With these priorities, the beginning of the year has, in our view, gone at a satisfactory level for Cloudriven. Our sales have grown a good 70% from the comparison year, and revenue development also looks positive. Naturally we invest significantly in our growth, but not beyond what our cash position can carry. For the financial results we have to thank not only our choices but above all our employees and customers. Satisfied people produce value for customers.