Cloud Center of Excellence (CCoE)

At Cloud Navigator, our laser focus on delivering services related to The Microsoft Cloud helps reduce the complexity of cloud adoption.  Our success with cloud onboarding projects is a big reason for our overall success as a cloud solution provider.  Many organizations choose to partner with a company focused on deploying workloads in the cloud even when they intend to manage those workloads for the long term.

We continuously refine our processes, management strategies, technical approaches, tools, templates, roles and responsibilities, and technology to meet the rapidly changing demands of IT in the cloud.  This work requires substantial learning, coordination, and dedication.

This is why we’ve established a Cloud Center of Excellence.

Our Cloud Center of Excellence (“CCoE”) is a deeply experienced team, backed by a set of valuable resources, focused on cloud transformation.  While many cloud deployments are virtually transparent to the rest of an organization, a core goal of the CCoE is to develop a standard, stable methodology for implementing change.  In our role as a cloud service provider, the CCoE must extend into our customers’ organizations to be effective.

The CCoE allows us to:

  • Leverage the knowledge of diverse stakeholders
  • Reduce rework and cost
  • Manage change and measure success

Disciplines and Best Practices

The CCoE team at Cloud Navigator is responsible for researching and adopting best practices and applying them in real customer scenarios.  These best practices span disciplines that include:

  • IT Project Management
  • IT Operations Management
  • Business Operations
  • Solution Architecture
  • Distributed Networking
  • AppDev and DevOps

Best practices include:

  • The Cloud Navigator Onboarding Project Framework
  • Tools, templates, and standards from the Project Management Institute’s (PMI) Project Management Body of Knowledge (PMBOK) Guide for IT Project Management
  • IT Infrastructure Library (ITIL) codes of practice for IT Operations Management
  • Microsoft Guidance for Hybrid Cloud deployment and migration
  • Azure solution architectures published by Microsoft
  • Office 365 migration performance and best practices from Microsoft
  • Clear delineation of roles and responsibilities for repeatable project types

The CCoE breaks down cloud transformation into its two major categories of activity: onboarding and operations.  Every onboarding project leads to a new workload that requires operational management.

Average Onboarding Projects Per Month: 12

Total onboarding projects to date: 245

Trusted Execution Environments in Azure

Microsoft is providing greater and greater levels of security for your apps and data.  A recent announcement introduced Trusted Execution Environments in Azure, an advance that underpins both Microsoft’s blockchain work and enhancements to Always Encrypted for Azure SQL Database.

I was first interested in learning more about Always Encrypted, since we use Azure SQL Database heavily with a number of clients.  The use of enclave technology to implement encryption-in-use for Azure SQL Database and SQL Server is an enhancement to Always Encrypted that ensures sensitive data within a SQL database can remain encrypted at all times without compromising the functionality of SQL queries.
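
Always Encrypted is client-driven: the database only ever sees ciphertext, and an enabled client driver encrypts and decrypts column data transparently.  As a rough sketch of what that looks like from Python with the Microsoft ODBC driver (server, table, and column names are placeholders, and the SSN column is assumed to use deterministic encryption so equality lookups work):

    # A minimal sketch of querying an Always Encrypted column from Python.
    # Server, database, credentials, and table names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:yourserver.database.windows.net,1433;"
        "Database=yourdb;Uid=youruser;Pwd=<password>;"
        "Encrypt=yes;"
        "ColumnEncryption=Enabled;"  # driver encrypts/decrypts transparently
    )

    cursor = conn.cursor()
    # Parameters that target encrypted columns are encrypted by the driver
    # before they leave the client, so plaintext never reaches the server.
    cursor.execute("SELECT PatientName FROM dbo.Patients WHERE SSN = ?",
                   "555-12-3456")
    for row in cursor.fetchall():
        print(row.PatientName)  # decrypted client-side by the driver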

I then learned about how Trusted Execution Environments, or TEEs, are coming to Azure, and felt I needed to spread the word.  Azure confidential computing ensures that when data is “in the clear,” which is required for efficient processing, it is protected inside a TEE (also known as an enclave). TEEs ensure there is no way to view the data or the operations inside from the outside, even with a debugger. They even ensure that only authorized code is permitted to access the data. If the code is altered or tampered with, the operations are denied and the environment is disabled.

Read more about it!

https://azure.microsoft.com/en-us/blog/introducing-azure-confidential-computing/


Easier External Sharing in SharePoint, but BEWARE the Dangers

Microsoft just announced the ability to share content with external users without requiring an Office 365 or Microsoft account.

“If your OneDrive and SharePoint Online external sharing settings are set to allow sharing with new external users, new external users (that have a file or folder securely shared with them) will be able to access the content without needing an Office 365 account or a Microsoft account. Instead, recipients who are outside of your organization will be sent an email message with a time-limited, single-use verification code when they access the file or folder. By entering the verification code, the user proves ownership of the email account to which the secure link was sent.”

This is a great advancement, and it will hopefully remove the primary obstacle to external users having a good experience when accepting and acting on invitations to access content.

Why you should be careful when sharing with external users

At Cloud Navigator we have used external sharing extensively to collaborate with customers on IT projects.  We also use the feature from time to time when we are collaborating with partners to develop proposals together.  SharePoint is a great platform for these activities.

We also use SharePoint for our internal purposes–HR and employment documentation, contracts, policies, and other private internal business content.

When you share content with an external user in OneDrive or SharePoint Online, a user account is created.  In SharePoint, a user profile is created, and the user is placed in a SharePoint group with access privileges for the content you have shared.  What you may not realize at the time you send the invitation is that this group may have access privileges extending far beyond the content you shared, so the external user could potentially reach private content you never intended them to see.

That’s bad.  It gets worse.  At the time of the share and user profile creation in SharePoint, the group the user is added to may only have rights to the content you shared, but later someone else in your organization might extend that group’s rights to other content, unintentionally sharing it with the external user.  Without the proper controls in place, an external user might even be able to give other external users inappropriate access in the same way.

I found an old blog post that explains some of the rights an external user can receive:

Understanding External Users in SharePoint Online

How to avoid the danger

The only way to prevent someone from accidentally giving inappropriate access to an external user is through vigilant IT governance and informed SharePoint deployment planning.  The first step is understanding the unintended consequences that may accompany external sharing.  We recommend developing IT Governance strategies that include monitoring/review of user accounts in Office 365 and SharePoint, as well as a review of site permissions and group access rights.
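
As one concrete piece of that monitoring, a periodic script can enumerate guest accounts so that someone actually reviews them.  Here is a hedged sketch against the Microsoft Graph users endpoint (token acquisition is elided; this assumes an app registration with the User.Read.All permission):

    # A sketch of enumerating external (guest) users for a periodic review.
    # Token acquisition is elided; assumes User.Read.All via Microsoft Graph.
    import requests

    token = "<access-token>"
    headers = {
        "Authorization": f"Bearer {token}",
        "ConsistencyLevel": "eventual",  # required to filter on userType
    }
    url = ("https://graph.microsoft.com/v1.0/users"
           "?$filter=userType eq 'Guest'"
           "&$select=displayName,mail,createdDateTime"
           "&$count=true")

    while url:
        page = requests.get(url, headers=headers).json()
        for user in page.get("value", []):
            print(user.get("displayName"), user.get("mail"),
                  user.get("createdDateTime"))
        url = page.get("@odata.nextLink")  # follow paging until exhausted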

A new way to extend your file shares in The Cloud

Microsoft just announced the preview release of Azure File Sync. This is going to be of great interest to a lot of folks that have large, cramped file shares.

Read the announcement here: https://azure.microsoft.com/en-us/blog/announcing-the-public-preview-for-azure-file-sync/

Pay close attention to this aspect of the new service:

“The real magic of Azure File Sync is the ability to tier files between your on-premises file server and Azure Files. This enables you to keep only the newest and most recently accessed files locally without sacrificing the ability to see and access the entire namespace through seamless cloud recall. With Azure File Sync, you can effectively transform your Windows File Server into an on-premises tier of Azure Files.”
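
One practical implication of tiering: a tiered file still shows up in the namespace, but its content lives in Azure until it is recalled.  On Windows, tiered files are flagged with file attributes; as a hedged sketch (assuming tiered files carry the Windows “offline” attribute, and using a placeholder share path), you can spot them from a script without triggering a recall:

    # A hedged sketch: detecting cloud-tiered files on a Windows file server
    # by checking file attributes (os.stat does not recall file content).
    # Assumes tiered files carry the Windows "offline" attribute.
    import os
    import stat

    SHARE = r"D:\Shares\Projects"  # placeholder path

    def is_tiered(path: str) -> bool:
        attrs = os.stat(path).st_file_attributes  # Windows-only stat field
        return bool(attrs & stat.FILE_ATTRIBUTE_OFFLINE)

    for name in os.listdir(SHARE):
        full = os.path.join(SHARE, name)
        if os.path.isfile(full):
            state = "tiered to Azure" if is_tiered(full) else "cached locally"
            print(f"{name}: {state}")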

Avoiding disaster at home and at work

In the wake of Hurricane Irma, I’m taking some time to reflect on my personal and business disaster preparedness plans and outcomes.  Living in Florida, we learn to expect a tropical storm or hurricane to threaten our homes and businesses each year.  Most years, we dodge the worst of it, but usually there is some impact.

Here I sit in my office two days after the storm passed over us, and the power is still out at my house a couple of miles away.  I live on a canopy road still served by overhead power lines, with a lot of majestic live oaks that frequently interfere with the electricity.  I have a portable generator, but it is loud and difficult to keep running for multiple days, so the food in our refrigerator and freezer is going to get tossed out again.  We don’t have running water either.

[Image: Hurricane Irma track]

There was a time not long ago when our company’s internal IT systems were prone to fail in some way, and not just during storms.  We suffered outages to email, application, and data servers whenever the power went out, and sometimes it was TRICKY to get them back up and running.  I can think back to one storm after which a database server suffered a disk drive failure as we tried to bring it back online, and we lost a couple of days getting it fixed.

Things are different today.  We have no servers at either of our office buildings, just a little bit of network hardware.  All of our systems are in The Cloud.  We rely on the built-in redundancy and fault tolerance offered by Microsoft’s cloud platforms.  These systems are designed to expect failures and outages and to remain operational and protected.  Disk drive failures probably happen frequently, but we never even know about them, and there is absolutely no impact on operations.

If a data center where one of our systems is deployed were to be destroyed, we of course would have some down time, but we know that all of our data, apps and servers are safely replicated to other data centers and ready to be brought online if necessary.

The day before Hurricane Irma hit us, we put a few sandbags along the front door to the office building.  The double glass doors have leaked water once or twice during heavy wind and rain.  That was the extent of our disaster preparedness efforts as the storm approached.

Over the past year, we’ve learned how self-hosted, on-premises IT systems can be protected in The Cloud, and we’ve been rolling out that capability to our customers, so you don’t have to go “all in” on cloud computing to enjoy some of its benefits.

One of the most important things I’ve learned myself is that it doesn’t have to be difficult to have peace of mind and a high confidence that stuff is going to keep on working.  It can be easy, inexpensive, and rapidly implemented.  All good!


Azure Site Recovery – Disaster Recovery Made Easy

Experiences with Azure Site Recovery

As a provider of cloud solutions, our earliest use of Azure Site Recovery (ASR) wasn’t protection at all, but rather migration.  We used ASR to replicate small sets of servers from customer premises to the cloud.  For customers running Hyper-V or VMware with supported server images, ASR makes migration to the cloud almost trivially easy.

Strategies for Delivering Disaster Recovery

After our success with this limited use of ASR, we were interested in more challenging engagements.  We wanted to use ASR to deliver Disaster Recovery as a Service to customers.  Our partners at Microsoft were happy to help us plan our strategy.  For example, they told us that other business partners were focusing on the onboarding phase of a typical ASR engagement, so we designed offerings where onboarding and ongoing monitoring and management are individual options.  For ongoing services, we opted to give our customers a choice between monitoring their recovery solution themselves or paying us a modest fee to do so.

An Example DR Engagement

This strategy soon paid off.  One customer, Florida Surplus Lines Service Office, had a budget for the initial work but wanted to keep ongoing expenses down by doing all monitoring themselves.  We bid the onboarding a bit lower than initially planned.  That’s a decision I would occasionally second-guess over the next few weeks.  In the end, even if our effective hourly rate was a bit less than we would have preferred, it was a valuable opportunity to learn, and we delivered a solid solution.

A Successful Outcome

We did all the configuration on the Azure side and worked closely with the customer on activities that had to be done on-premises.  The critical on-premises tasks were to run the deployment planner; set up a VPN gateway between the local and Azure networks; set up a configuration server for their VMware environment; and update a critical Oracle/Linux server to a version supported by ASR.

Based on the output from the deployment planner, we divided their servers into three batches to initiate protection.  Each group took a day or two to reach protected status.  On the Azure side, we had two networks set up: a VPN-joined network hosting a secondary domain controller, and an isolated network for test failovers.  Our very first test was to fail over a domain controller to the isolated network.  We then promoted it to primary so domain services would be available on that network.

Next we did a test failover of all protected infrastructure to the test network.   Test failovers do not impact protected workloads and can be used for non-disruptive DR readiness testing.  The customer confirmed that interactions between different parts of the failed-over infrastructure performed correctly.

Our final test was a true failover and failback of a test machine on their production network.  Their servers communicated effectively across the VPN gateway, and the failed-over server retained changes after failback.

At this point we and our customer were satisfied that their servers were properly protected.  Although they opted to monitor the solution themselves, we keep an engineer on their alert notification list.  We review the notifications from time to time, and as Microsoft continues to improve their monitoring tools we plan to keep them updated on features and practices that may be of use to them.

Join Entities in Dynamics 365 and Why They Are Awesome

 

If you have ever worked with many-to-many (N:N) relationships in Dynamics 365 (the product formerly known as Dynamics CRM), you may at some point have created an N:N relationship between entities.  It is a useful relationship type for sure, but it has some serious ‘out of the box’ limitations.

 

The main issue I have always had with it is the complete inability to execute a workflow process when the relationship is created, and the lack of any audit record of who created or modified the relationship, and when.

Example 1:

Contact has an N:N relationship to a custom entity called ‘Web Roles’.  You assign new Web Roles to the Contact record to allow access to pages on a custom portal, but you need to know who added each role and when it was added.  Say you have delegated web role assignment to customers with an admin role so they can manage their own users on the portal.  How would you know who added a role, and when?

Example 2:

Contact has an N:N relationship to Account.  Each Contact has a regular parental N:1 relationship to an Account, but it might also be related to several other Accounts.  Perhaps they are a distributor of your products: they have a company they work for, but they also work with several of your other Accounts to sell them products.  And each Contact may play a different role in relation to each of those other Accounts.

But Wait??

If you have worked with Dynamics 365 for any amount of time, you might be thinking, “Hey, you can use the built-in Connections entity for this.” And you would be correct, but only if you plan to use Connections for just one type of N:N relationship.  Since Connections can link ANY record to any other record, it’s a lot more generic than it needs to be.

 

Solution:

The solution is what I like to call a ‘Join Entity’.  If you have ever done traditional database or application development directly against a SQL database, you should already be familiar with this concept.  It’s basically a table that sits between two other tables and stores the primary keys of the records in each table that need to be joined.

 

 

In Dynamics 365 parlance, we create a Join Entity that works just like a join table.

Step 1: Create a new Custom Entity

When you create this entity, give it a good name that reflects what you are joining.  In this example, we are going to create a join between Account and Contact to allow multiple Contacts to be associated with multiple Accounts.  I’m going to call this one Account To Contact.

 

The Ownership option is up to you.  In most scenarios, it is safe to set this to Organization, since we are just using the entity to join up other entities and we don’t need the overhead associated with User or Team ownership.  If the relationship calls for it, and a user or team needs to own the relationship, then by all means set it that way.

 

Most likely you won’t need any of the Communication & Collaboration options enabled, and you can always enable most of them later anyway.  As a rule, I like to keep them all OFF until I know I need them.

 

For the Data Services options, I would turn on Allow Quick Create and Enable Auditing.

 

For Primary Field, you can leave the default name as Name, or give it something else more appropriate if you prefer.  We’ll talk about how to deal with this field in a later step.

 

IMPORTANT: You need to set the Field Requirement for the Primary Field to Optional at this step.  If you forget, don’t worry; it can be changed later by editing the field directly, but it’s best to do it now.

 

Side Note: For the color setting, I like to set all my custom entities’ color to plain white (#ffffff) and then get some nice flat black icons from Icons8 (http://www.icons8.com).  It’s an awesome site with thousands of icons, and I highly recommend it.

 

Click Save to create your new entity.

 

 

Step 2: Add the Relationship Lookups

Now we need to add the appropriate N:1 lookups to the entities we are trying to join on.
Click the Fields option and then click New in the toolbar.

 

Display Name should be something that makes sense to the user that will be adding this new record/relationship.  For our example, we are joining to Account so we will call it Account.

 

Tip: This will also default the Name field to Account, but it makes sense to add a suffix of Id to this SDK name field.  It will be helpful later if you have to write any JavaScript or .NET code that references the field: you’ll be able to quickly recognize it as a GUID lookup to another entity.

 

Field Requirement should be set to Business Required to avoid any orphaned join records.

 

Select a Data Type of Lookup and a Target Record Type of Account.

 

 

Now repeat the process to add the Contact lookup:

  • Display Name = Contact
  • Name = new_ContactId
  • Field Requirement = Business Required
  • Data Type = Lookup
  • Target Record Type = Contact
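
For reference, once both lookups exist, creating one of these join records from code is a single Web API call.  This is a hedged sketch against the Dynamics 365 Web API; the entity set name (new_accounttocontacts) and the field schema names are assumptions that follow the naming used in this post:

    # A sketch of creating an Account To Contact join record via the
    # Dynamics 365 Web API. The new_* names are assumptions based on this
    # post's naming; the GUIDs and org URL are placeholders.
    import requests

    org = "https://yourorg.crm.dynamics.com"
    headers = {
        "Authorization": "Bearer <access-token>",  # from Azure AD OAuth
        "Content-Type": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    }

    join_record = {
        # Lookup fields are set with @odata.bind references to target rows.
        "new_AccountId@odata.bind": "/accounts(11111111-1111-1111-1111-111111111111)",
        "new_ContactId@odata.bind": "/contacts(22222222-2222-2222-2222-222222222222)",
    }

    resp = requests.post(f"{org}/api/data/v9.0/new_accounttocontacts",
                         json=join_record, headers=headers)
    resp.raise_for_status()  # 204 No Content means the record was created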

Step 3: Edit Default Form

Click Forms in the left navigation to review the list of built-in system forms.  Click the Information form listed first, with a Form Type of Main.

 

Customize this form and add your newly created Account and Contact fields.  Click Save and Close to save your form changes.

 

Tip: Since you only get one section by default, the Name (and possibly Owner) field(s) will fill up the entire width of the form.  I find this annoying and completely impractical, but that’s just me.  I will usually edit the General Tab and set the Formatting option to use Two or Three Columns.

 

 

Step 4: Create Quick Create Form

While still looking at the list of Forms, click New on the toolbar and select Quick Create Form.

 

Modify the form by adding your two lookup fields.  Then Save and Close the new form.

 

Optional:  It’s probably a good idea to go ahead and publish your changes now if you haven’t already done so.

 

 

Tip: If you know which source entity will be used the majority of the time, put the other entity’s lookup first on the form.  In this case, we are assuming that Contacts will be added from the Account record, so we are showing the Contact lookup field first.  The Account field will already be filled in when the join record is quick-created.

 

Step 5: Build a workflow to set the Name Field

Now that we have the basics set up, we need to set a Name for these new records.  This is the name that will show in any lookups to this entity, which the system uses by default.  We’ll build some views to use on forms, but we can’t just leave this field blank.  This workflow will set the Name to a combination of the Account and Contact names.

 

Note: Recall that in Step 1 we set the primary field requirement to Optional.  If you did not do that, now is the time.

 

Navigate in your Solution (or All Customizations) to Processes and click New.
  • Process Name: This should reflect the entity name, RT for ‘Real-Time’, and a short description.  I like to include New in the name to show that it runs when a new record is created.
  • Category = Workflow
  • Entity = Account To Contact
  • Run this workflow in the background should be Unchecked/Off

 

 

Click OK to create the new process.
Options for Automatic Processes
  • Scope = Organization
  • Start When =
    • After Record is Created
    • After Record Fields Change
      • Select Account and Contact (the custom join lookup fields added earlier)
  • In the logic area, add a step to Update Record
  • Click the Set Properties button next to the Update Record.
  • Click in the Name field and add the dynamic values of Account and Contact.

 

Tip: Put some kind of delimiter, like a dash, asterisk, or colon, between the values.

 

 

Note:  These can be whatever dynamic values you like, but try to make the result unique to this join record.  Remember that this name will show up in all the lookups that reference this join entity.

  • Save and Close the Properties
  • Activate the Workflow Process

Step 6: Customize a Form View

We are going to want to add a sub-grid to both entities we are joining to, so we need a view that shows the values.

 

At this point we haven’t modified any of the default views.  We will modify the Active Account to Contact view, then do a Save As and adapt the copy for use as a form view.

 

  • Navigate in your solution (or All Customizations) to the Account to Contact entity.
  • Select Views from the left navigation tree
  • Select to edit the Active Account to Contact view
  • Add the Account and Contact lookup columns to the view, and order them as you prefer.
  • Save the view, but DO NOT CLOSE the window
  • Click Save As and enter Form View for the new view name
  • Remove the Name column from the view
  • Move the Contact name to the first column.  Note: This is because we will use this view on the Account form, so it makes sense for the Contacts to be listed first.

 

 

Option: If you like, you can do another Save As, call it Contact Form View, and set the Account as the first column.

 

Step 7: Add it as a sub-grid to a form

Now that we have our entity, have set the fields correctly, and have built a form view, we can put this all together on the Account form and see how it works!

 

  • Navigate in your solution (or All Customizations) to the Account entity.
  • Select Forms from the left navigation tree
  • Select the Account form of Form Type Main from the view
  • Scroll to the appropriate spot on the form where you want to display the new entity
  • Insert a Section to contain the Sub-grid
  • With the Section selected, insert a Sub-Grid
The key fields here are in the Data Source area.
  • Records = Only Related Records
  • Entity = Account to Contact (Account)
  • Default View = Form View

 

 

Note:  If/when you add this to Contact, select Contact Form View instead of Form View.
Save, Publish, and Save and Close the form changes.

 

Step 8: Test it!

Tip: If you haven’t recently done so, now is a good time to Publish All Customizations.  We’ve made a lot of changes so we want to be sure everything shows up for testing.
  • Navigate to an Account record
  • Scroll down to where you added the sub-grid and it should look something like this

 

To add a new Contact to relate to this account, click the + button.  You should see the Quick Create form we created earlier.

 

 

Note: If you don’t see this + button, or you get the regular entity form instead of the quick create form, you missed a step somewhere in setting up the entity and lookups. Check the following:
  • Is the Account to Contact entity set to Allow Quick Create?
  • Are the Account and Contact lookup fields set to Business Required?
Enter a Contact name and click Save.  You should now see the new relationship added!

 

Step 9: Extend it!

What we’ve done so far has really been pretty much what you get out of the box for N:N relationships, with the exception of Step 8, where we added the new relationship with a quick create form.  Adding those with the out-of-the-box functionality is painful (just my opinion…).  Now we can extend this new entity to do something the out-of-the-box functionality does not.

 

For our example, we are going to add a Role attribute to the entity.
  • Navigate in your solution (or All Customizations) to the Account to Contact entity.
  • Select Fields from the left navigation tree
  • Click New to add a new Field
  • Display Name: Contact Role
  • Data Type: Option Set
    • Add options for Roles 1, 2, 3, and 4
Note:  This is just for example purposes; you can add any field type as required by your needs.

 

Save and Close the new field

 

 

  • Select Forms from the left navigation
  • Select the Quick Create form type
  • Drag the new Contact Role field on to the form
  • Save and Close
  • Also edit the main Information form and add the Contact Role field, Save and Close
  • Select Views from the left navigation
  • Select the Form View(s)
  • Add the new Contact Role field to the view(s)
  • Save and Close
  • Publish your changes
Now when you view the data on your Account entity, you’ll see the new Role value assigned to each record.
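
And because the join is a real entity, reading it back from code is an ordinary query.  A hedged sketch (again assuming the new_* names used above, and reusing the org and headers from the earlier creation example) that lists one Account’s contacts with their roles:

    # A sketch of reading the join records, including the new Role field,
    # for a single Account. Names are assumptions matching this post.
    account_id = "11111111-1111-1111-1111-111111111111"  # placeholder GUID
    query = (f"{org}/api/data/v9.0/new_accounttocontacts"
             f"?$select=new_name,new_contactrole"
             f"&$filter=_new_accountid_value eq {account_id}"
             f"&$expand=new_ContactId($select=fullname)")

    resp = requests.get(query, headers=headers)
    resp.raise_for_status()
    for row in resp.json()["value"]:
        # Option set values come back as integers; labels need a metadata
        # lookup or the formatted-value annotation.
        print(row["new_ContactId"]["fullname"], "-", row.get("new_contactrole"))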

 

Wrapup

We have covered a lot of ground, but you should now have a good grasp of the power of using this Join Entity concept instead of the out-of-the-box many-to-many (N:N) relationship.  You gain auditing, so you can see who changed what, and when.  You gain the ability to extend the relationship with additional fields that describe it (e.g., roles).
You can also extend the entity to run workflow processes when records are created or modified.  We did a simple name update, but a workflow could perform many other tasks if needed (e.g., send an email or update a related record).

 

 

A Taxonomy of Microsoft Security Services

I was having difficulty keeping up with all of the Microsoft security-related products, services, features, and nomenclature, so I started this taxonomy.  What I found is that multiple “product” or brand names can apply across the same technology set, which gets confusing.  This listing might be helpful in certain cases.  It has certainly helped me get my mind around what Microsoft has to offer.

Fortunately, it turns out that it is not so difficult to match the right set of security services to your situation and need.  We do it all the time with customers.  It may just be easier than sorting out all of these names!

  • Azure Rights Management (Azure RMS)
    • Policies and encryption
    • Includes:
      • Information Rights Management (IRM)
        • Document, library, and message policy-based data loss protection
      • Office 365 Message Encryption (OME)
        • Protected sharing via email and OneDrive
  • Azure Information Protection (AIP)
    • A broader label and product packaging over Azure RMS
  • Advanced Threat Analytics (ATA)
    • On-premises solution
    • Uses machine learning to adapt
  • Azure IaaS Security
    • Network Security Groups, VPN Gateway
    • Azure Storage Service Encryption
    • Azure Disk Encryption
    • Web Application Firewalls
    • Azure Monitor
    • User Defined Routes
    • Network Watcher
    • Azure Storage Account Keys
    • The following have an impact on security:
      • Azure Traffic Manager, Application Proxy
      • Azure Storage Analytics
      • Azure Backup and ASR
      • Remote Desktop Gateway
      • Azure Dev/Test Labs
  • Azure PaaS Security
    • Azure SQL Transparent Data Encryption
    • Firewall, connection encryption
  • Azure Security Center
    • Monitoring of Azure resources
    • Full monitoring, threat detection, policy-based platform for security in Azure
    • Application whitelisting
    • Just-in-time network access to VMs
    • Machine learning for brute-force detection and outbound DDoS
    • Azure SQL Database Threat Detection
    • Integration with partners: Fortinet, Cisco
  • OMS
    • Log reporting and alerting
    • Can collect Azure resource logs as well as on-premises logs when connected to SCOM
    • Security & Compliance solution
      • Security Compliance Manager
    • Update and Change Management
    • Antimalware Assessment
    • Active Directory and SQL health analysis
  • Azure Active Directory
    • Premium
    • B2C
    • Domain Services
    • Multi-factor Authentication (MFA)
  • Azure Key Vault
    • Hardware Security Modules (HSMs)
    • SIEM export
  • Enterprise Mobility + Security (EM+S) (aka Enterprise Mobility Suite (EMS))
    • Intune for mobile device management
    • Azure Rights Management Services
    • Advanced Threat Analytics
    • Azure AD Premium
    • Remote Desktop Services
  • Office 365
    • Advanced Threat Protection
    • Security & Privacy settings
      • Password policy
      • Customer Lockbox
      • Sharing
      • Self-service password reset
    • Security & Compliance
      • Cloud App Security (aka Advanced Security Management)
      • Threat management
      • Data Loss Prevention
      • Data governance
      • Search & Investigation
      • Service Assurance/Compliance Reports

Using OMS and ASC for Threat Detection

Have you ever heard the phrase “the shoemaker’s kids go barefoot” or “the mechanic’s car never runs”? Well, you can add a new one: “the IT consultant’s labs are insecure.” As a 25-year veteran of the IT industry, I’m very familiar with limiting access and reducing the attack vectors for internet-connected devices. I do this for customers every day and have gotten pretty good at making sure bad actors cannot break into the systems I design and set up. Like any good IT consultant, I also have access to a shared lab of servers that acts as a sandbox for testing and understanding deployment scenarios. For convenience, the consultants at our company need to be able to access that lab from anywhere, anytime, from any device. The lab does not contain anything of value or any customer data, so my thinking was to open it up for “convenience.” The lab has been running for many years with no issues; it was originally set up early on in Azure using classic (ASM) IaaS VMs.

Fast forward to May 2017, when I attended the Azure Architect Bootcamp: five days packed with more information than anyone should legally be allowed to consume. During the presentations, I was intrigued by the capabilities of OMS and the analytics it captures. I was following along with the presenter for OMS when I noticed in Service Map that there were lots of “Terminal Services” connections to one of our lab machines from numerous external IP addresses that were not from our offices. The VMs were implemented with a classic Network Security Group that was allowing any-to-any connections over port 3389. As soon as I saw the connections in the OMS Service Map, which I had deployed in my lab the day before, I suspected a port scan or some type of intruder.

[Screenshot]

The next presenter started the presentation on Azure Security Center (ASC), so I switched over to Security Center, and that is when I noticed this: NSGs missing on subnets and VMs. The highlighted machines are production machines that are locked down separately, but all the others that start with “ISC365” are lab machines in this subscription and are rarely logged into.

[Screenshot: Azure portal]

I immediately logged into the lab system ISC365-AP1 to view the security event logs, and lo and behold, I was actively being attacked: every few seconds, an active connection was guessing usernames and passwords.  This was some sort of password-guessing bot using a database of well-known passwords, and even though we use strong passwords on the administrative accounts, there is a chance some of the test user accounts could have known passwords.  Notice the number of security events: over 200,000 entries, so this had been going on for a while.

[Screenshot: log entries]

I then went back to ASC to have it implement an NSG on the VNets to allow RDP traffic only from our offices. The time was 5:10 EST, and within three minutes the attack stopped dead in its tracks. As you can see below, nothing happened after that time, and I was relieved.

[Screenshot: audit failures]
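
ASC applied the fix for me, but the same lockdown can be scripted.  A hedged sketch with today’s Azure Python management SDK (resource names and the office IP range are placeholders; with the NSG’s default rules in place, inbound internet traffic that is not explicitly allowed stays denied):

    # A hedged sketch of the remediation: an NSG rule that allows RDP
    # (TCP 3389) only from a known office range. Names are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    client = NetworkManagementClient(DefaultAzureCredential(),
                                     "<subscription-id>")

    rdp_from_office = SecurityRule(
        name="Allow-RDP-From-Office",
        protocol="Tcp",
        direction="Inbound",
        access="Allow",
        priority=100,                            # wins over default deny rules
        source_address_prefix="203.0.113.0/24",  # office egress range (placeholder)
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range="3389",
    )

    client.security_rules.begin_create_or_update(
        "lab-rg", "lab-nsg", "Allow-RDP-From-Office", rdp_from_office
    ).result()  # blocks until the rule is provisioned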

After a refresh of the screen, ASC reported the issue as resolved by the actions it took. I am impressed at how well it detected and remediated this; that is hard to do.

[Screenshot: resolved]

As IT professionals, we are asked to do so much with so little for so long that management thinks we can do anything with nothing forever. Project deadlines and business drivers ask us to do more and more every day, but most companies don’t invest in threat detection or remediation software until there is an “event.”  In Azure, it’s baked into the platform and can be implemented in a way that secures resources by default and audits them over time. For all of those IT professionals out there who are apprehensive about using Azure because of security concerns, I say to you that Azure gives you the tools you need to implement practical security measures. Given their deep pockets and laser focus on security, I believe Microsoft will make Azure more secure than any on-premises implementation, even if the consultants miss something.

Related articles:

Security and Compliance

Information Protection

 

FDLE CJIS Audit of Azure and Office 365 Completed

Last week, Microsoft announced (https://blogs.msdn.microsoft.com/azuregov/2017/07/14/florida-finalizes-cjis-technical-audit-on-microsoft-government-cloud-services/) that a Florida Department of Law Enforcement audit of Microsoft cloud platforms for compliance with the FBI Criminal Justice Information Services (CJIS) Security Policy was completed.

The ramifications of the announcement are somewhat unclear, but reading between the lines, this is fantastic news!  I think it means that soon, State of Florida agencies will be able to consider Azure and Office 365 for tracking CJIS-regulated law enforcement data.
