C# Intro for Powershellers

Let me begin by stating that I am not a programmer or C# expert.  I am a systems engineer by trade who has become proficient in Powershell scripting.  When I first began working in IT, about 13 years ago, the first programming language I tried to learn was Perl.  Almost immediately, I discovered that programming and I were not a good match, so I dumped Perl after a few “dates” and never looked back.  However, that was then.  Now, after having used Powershell to create scripts that automate tasks, I have developed a renewed interest in coding.  According to Jeffrey Snover, the inventor of Powershell, the intent behind Powershell was for it to be a glide path to C#.  I guess you can say: mission accomplished.

In this article, I’m going to create a basic C# application in Visual Studio, using Powershell as a reference to explain some of the concepts in C#.  As the saying goes, use what you already know to learn new things.  But first, let’s review a few basics.

Before we dive into C# coding, we should briefly cover the essential elements of the .Net Framework architecture.  The .Net Framework has two main components – the Common Language Runtime (CLR) and the Class Library.  In a nutshell, the CLR is responsible for converting compiled C# code into the native code of the underlying machine.  C# code is first compiled into a format called Intermediate Language (IL).  This IL code is consumed by the CLR and converted into a format the underlying machine can execute.  This process flow is depicted in Figure 1.

The second component of the .Net Framework architecture, the Class Library, is a set of namespaces, classes, interfaces and types that applications use to access the system’s functionality.  A small subset of the .Net Class Library is shown in Figure 2.

Figure 2

Due to the sheer number of classes, they need to be organized by function.  This is the purpose of a namespace – a container of related classes.  Figure 3 below illustrates the structure of a .Net Framework namespace called “System”.  The System namespace consists of classes such as Console, DateTime, Math and many more not shown in the diagram.  Powershell commands and C# code use namespaces to produce and manipulate instantiated instances (objects) of these classes, which contain various methods and properties.

As mentioned, within each namespace are classes.  A class (Figure 4) is a container that consists of various methods and types.  Methods allow you to perform actions against an object referenced by a class, and types describe its properties.  In essence, a class can be seen as the blueprint used to build objects.  For instance, a class called “Car” may have properties such as make, model, color and year.  It will also have methods that determine the actions that can be performed against a “Car” object, such as Start(), Drive(), Stop() and so on.  The types and methods of a class are known as the class members.
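To make this concrete, here’s a minimal sketch of what such a “Car” class might look like in C# (the property and method names are just the illustrative ones from above, not from any real library):

```csharp
using System;

// A minimal "Car" class: properties describe the object,
// methods define the actions that can be performed against it.
public class Car
{
    // Properties (the "type" members described above)
    public string Make { get; set; }
    public string Model { get; set; }
    public string Color { get; set; }
    public int Year { get; set; }

    // Methods (actions that can be performed on a Car object)
    public void Start() { Console.WriteLine("Engine started."); }
    public void Drive() { Console.WriteLine("Driving..."); }
    public void Stop()  { Console.WriteLine("Stopped."); }
}
```

Together, these properties and methods are the members of the Car class.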

To build a fully functional C# application, we must assemble all of the necessary components that work together to produce the desired result.  For instance, an application that needs to write output to the screen or work with a collection of data will require you to make the System and System.Collections namespaces, along with the relevant classes, accessible to your application.  As the number of namespaces your application uses increases, they are organized into a unit called an “Assembly” – basically a DLL or EXE file (Figure 5).

Figure 5  C# Assembly (DLL or EXE)

Now that we have completed our review of the .Net Framework architecture and components, let’s proceed to doing some basic C# coding.


For this article, I will create a C# application that will retrieve the current date/time and output the results to the console screen.


As you already know, in Powershell this is done simply by executing the “Get-Date” command shown in Figure 6.

Figure 6

It’s also important to know how Powershell reaches into the .Net Framework to produce this output.  If we pipe this command to “Get-Member”, we will see the TypeName – the namespace and class – that the Get-Date command is referencing.  In Figure 7, we see it’s using the DateTime class, which is a member of the “System” namespace.  Therefore, our C# code must also use the System.DateTime class to achieve the same result.  For non-seasoned Powershellers: I omitted additional output produced by Get-Member in the Figure 7 screenshot.  The Get-Member command also displays the methods and properties of the objects passed to it via the pipeline.

Figure 7
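For reference, the two commands behind the figures above look like this (Get-Member output trimmed here, as in the screenshot):

```powershell
# Get the current date and time
Get-Date

# Inspect the object type behind the output;
# the TypeName line reports System.DateTime
Get-Date | Get-Member
```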


I will be using Visual Studio Community 2017 to accomplish this task.

First, open Visual Studio and go to File > New > Project (CTRL + Shift + N) as shown below in Figure 8.

Figure 8

In the window that appears (Figure 9), expand Visual C#, select .Net Core and then choose Console App (.Net Core).

Figure 9

Notice you are given several settings to configure:

  • Name:  Give your Project a name.  I will call mine DateApp1.
  • Location:  Choose a location to store your project (or use the default location).
  • Solution:  You can create a new solution or use an existing one.  A solution contains the project and build information, and it can contain multiple projects.
  • Solution Name:  By default, this will be the same name you give the Project.  You can give it a different name but I will leave the default.
  • Create new Git repository:  Check this box to create a Git repository.  I will leave it unchecked.

Once all the settings are configured, hit “OK”.  As shown below in Figure 10, you will now see a window with the basic structure of your C# program.

Figure 10

Let me explain the above.  By default, certain namespaces are automatically added to a new program, each preceded by the “using” keyword.  But remember, our project only needs the “System” namespace; it contains the class (DateTime) that our application requires.  Also, notice that a namespace is created that matches the name of the project, along with a class called “Program”.  Both can be renamed.  The final piece is a static method named Main that accepts an array of string parameters.  Its return type is “void”, meaning it doesn’t return a value.  A static method is one that can be called without creating an instance of the class it’s a member of.  The following Powershell examples illustrate the difference:
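For readers without the screenshot, the generated template looks roughly like this (the exact set of using directives varies by Visual Studio version and project template):

```csharp
// Namespaces added automatically by the template
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

// Namespace matching the project name
namespace DateApp1
{
    class Program
    {
        // Static entry-point method; "void" means it returns no value
        static void Main(string[] args)
        {
        }
    }
}
```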

Figure 11  Powershell static method of the
DateTime class to add 5 days to the current date

Figure 12 Powershell instance method of the
DateTime class variable ($date) to add 5 days
to the current date
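In Powershell terms, the two figures above correspond to something like the following:

```powershell
# Static: reached through the class itself, no instance required
[DateTime]::Now.AddDays(5)

# Instance: called on an object stored in a variable
$date = Get-Date
$date.AddDays(5)
```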

Figure 13 shows the structure of our DateTime app after removing the unused .Net namespaces, changing the class name and adding the code to display the current date on the screen.  To achieve this, create a variable called “Now” and assign the DateTime.Now property to it.  In C#, a variable is declared by preceding the variable name with the name of the .Net type it holds.  Finally, add the Console.WriteLine method with “Now” as its argument.  Also, notice the “System” namespace is now highlighted, which indicates it’s being used.

Figure 13
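In case the screenshot is hard to read, the finished code is essentially the following (the class name here is illustrative; use whatever you renamed “Program” to):

```csharp
using System;

namespace DateApp1
{
    class DateApp
    {
        static void Main(string[] args)
        {
            // Assign the current date/time to a DateTime variable
            DateTime Now = DateTime.Now;

            // Output it to the console
            Console.WriteLine(Now);
        }
    }
}
```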

To compile and execute the above code, press CTRL + F5.  If the build is successful, you will get a pop-up Command window showing the current date and time (Figure 14).  By default, it’s displayed in a long date and time format.

Figure 14

Visual Studio will also show output detailing the build results.  It displays whether the build succeeded or failed, along with the location of the application’s executable file.

Figure 15

In summary, in this article I explained the fundamentals of .Net Framework and created a basic C# application that outputs the current date and time to the console.


MyIgnite – Reflections on Microsoft Ignite 2018

I attended this year’s Microsoft Ignite conference in Orlando, FL and decided I would share my reflections on the event.  The annual conference provides a plethora of sessions on Microsoft technology offerings and solutions related to Microsoft 365, IoT, containers, DevOps, team collaboration, Azure services and more.  There’s also an expo of various IT vendors, panel discussions on diversity in IT, and hands-on labs for IT skill development.  It’s a huge event, with attendees from all walks of IT from around the world.

The conference kicked off with a keynote address from Microsoft CEO Satya Nadella.  In his opening speech, he outlined Microsoft’s vision of the next generation of IT.  This vision centers on an intelligent cloud and intelligent edge that transform products (business apps, gaming, infrastructure, etc.) and how IT organizations design their operations.  Traditionally, IT has been slow to adopt new technologies due to security concerns and policies.  Some IT shops are still afraid of the cloud and the perceived risks it presents to business information.  However, this posture is no longer viable, as users and business partners must be given the flexibility to be productive from any device and any location.

What’s new at Microsoft?  Two new features introduced at Ignite are Ideas and a refined Microsoft Search.  Ideas is a cool feature that uses AI to predict what a user will do and can offer design suggestions when creating a PowerPoint presentation.  For instance, if you are designing a slide, Ideas will suggest a particular graphic based on the slide’s text.  It can also find content inconsistencies, such as a word spelled differently in different places, and offer to remediate them.  You must have Office ProPlus to use this feature, since it leverages AI capabilities in the cloud.  It was also announced that Microsoft Search has been expanded to search across all Office products and device types.  Using Microsoft Graph and Bing, it intelligently provides customized results based on your previous activities and work.

Also, Microsoft 365 now has a new Admin Center.  An improvement on the Office 365 Admin Center, it offers a more focused and centralized workspace for managing and securing resources in Microsoft’s cloud ecosystem.  If you are a Security Administrator for your organization, the Microsoft 365 Admin Center has a portal at security.microsoft.com dedicated to security-related responsibilities such as DLP, document classification and permissions.  There’s also a portal at admin.microsoft.com for managing users, groups and resources.  This approach falls in line with the concept of Just Enough Administration (JEA).

Microsoft is clearly implementing a full-court press towards wider adoption of Azure and Office 365.  A majority of the sessions related to the Azure cloud platform and its myriad offerings.  On-premises enterprise applications such as Exchange Server may not be dead, but they are definitely on the endangered list.  At past conferences, there would have been a variety of sessions around on-premises Exchange and related features, particularly in a year with a new release of Exchange Server.  Not so this year.  There were only a couple of sessions devoted to Exchange 2019, which is currently in Preview.  Then again, the handwriting has been on the wall for several years that the focal point of messaging is the cloud.

In addition to the technical skill development of staff, a very important part of IT is creating a work environment that is free of sexual harassment and racial bias.  The IT field is very male-dominated, and an unspoken reality is that it’s often a toxic world for women.  It was good to see several sessions in the lineup highlighting the challenges that women face in IT, how to overcome biases, and how to create a more inclusive workplace.

Microsoft has announced that Ignite will be held in Orlando again in 2019.  However, it will be the first week of November as opposed to the last week of September.  This will mark the third year in a row that Ignite will be in Orlando.  Although it’s nice to be able to visit different cities, Orlando is a great location for the conference, which draws nearly 30,000 attendees.  The weather is great, it’s close to the Disney theme parks (the closing celebration was at Universal Studios) and it’s not congested like other major cities.

Those are my thoughts and takeaways.  What stood out to you about this year’s Ignite?







Azure AD Attribute Hide and Seek

Azure AD Connect provides organizations with the ability to synchronize their on-premises users and groups to Azure Active Directory.  When synchronizing objects to Azure, administrators can control which users or groups are synchronized to the cloud.  Furthermore, it’s also possible to select which user or group attributes are synchronized.  Some organizations have security policies that prohibit certain information, such as phone numbers and addresses, from appearing in the cloud.  Luckily, attributes can easily be filtered by unchecking them on the AD connector object in Synchronization Service Manager.  But what if an attribute is being synced, yet does not appear on the AD connector as a filterable option?  Here’s an example that shows you how to deal with that.

Let’s take a look at a user called TesterB in Powershell.  Using the Azure AD Powershell module (or Azure Cloud Shell), we can get the user object and its properties with the following command.  Notice that the City attribute for our user is set to New York.
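The command itself was shown in a screenshot; it was along these lines (the search string is the sample account from this article):

```powershell
# Retrieve the TesterB user and display the City attribute
Get-AzureADUser -SearchString "TesterB" | Select-Object DisplayName, City
```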

We don’t want location information available in Azure AD.  Let’s log on to the Azure AD Connect server and open Synchronization Service Manager to filter this attribute.  Once there, click the Connectors button.  You will see two connectors:  one for Azure AD and the other for on-premises AD.  Select the on-premises AD connector.

On the Properties window for the AD connector, click on “Select Attributes” to see the list of attributes that are available and being synchronized to Azure.

As shown below in the AD connector attributes window, there isn’t a “City” attribute.  (The attributes with a check mark are the ones being synced to Azure AD.)  This view shows the LDAP name for each attribute, which is not always the same as its display name – and the display name is what the user property showed above in Powershell.  To get to the bottom of this, we will need to look at the Attribute Editor for the user object in on-premises AD.

Open the TesterB user in ADUC and go to the Attribute Editor tab.  There you will see a list of the available attributes.  This view shows the LDAP name for each attribute and its value, if one is set.  The LDAP name for City is “l”, as confirmed by its value of New York.

Now, if you go back to the AD connector for verification, you will notice the attribute “l” is checked.  This needs to be unchecked.

Once you uncheck it and save the change, run the following command in Powershell to remove the City information from users in Azure AD and prevent it from being synced in the future.
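The command from the screenshot triggers a full synchronization cycle from the Azure AD Connect server, which pushes the attribute change (and the removal of existing City values) out to Azure AD:

```powershell
# Run a full sync cycle so the connector change takes effect
Start-ADSyncSyncCycle -PolicyType Initial
```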

A quick look at the City property for TesterB shows the location is no longer displayed.

That’s it!  If you ever have a situation where you can’t find an attribute to filter on the AD connector, remember that it probably has an LDAP name that differs from its display name.





A Guide to Passing Azure Exam 70-533

Back in April of this year, I passed Azure exam 70-533:  Implementing Microsoft Azure Infrastructure Solutions.  To be honest, this was actually my second attempt; I failed on my first try about three weeks earlier.  But who’s counting?  All that matters is that I persisted and eventually passed.  I don’t mention this to discourage anyone intending to take the exam – rather, my intention is to provide encouragement if you don’t pass the first time around.  No one likes seeing the word “Fail” on the exam printout, but it’s not the end of the world.  With that said, I thought I would write an article outlining the methods I employed to prepare for the test.

Practical Experience

First and foremost, you will need hands-on experience to pass this test.  Azure exam 70-533 is not easy and cannot be passed solely by reading books or articles.  If you do not have access to Azure through your employer or a Visual Studio subscription, Microsoft offers a 30-day free trial, which comes with a $200 credit.  The free trial allows you to create resources in Azure such as VMs, virtual networks, storage accounts, web apps, containers, etc.

Once you set up your account, it’s important to have a strategy for learning the skills needed to pass the exam.  Microsoft publishes a list of objectives and related skills covered by the exam.  As of this writing, the objectives were last updated on March 29, 2018.  Under each category of objectives are a number of relevant tasks or exercises.  Go to the exam site and do exercises around all of the listed skill areas.  Microsoft has excellent documentation that will help you develop the skills measured by exam 70-533.  It’s also very important to learn how to accomplish tasks using Powershell and ARM templates, not only in the Portal.  For instance, learn how to deploy VMs and related resources from a script or template.  Perform all of the tasks until you feel you have mastered them.
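As an example, a template deployment in Powershell looks like this (resource names and file paths are placeholders; this uses the AzureRM module that was current for 70-533):

```powershell
# Create a resource group, then deploy the resources
# defined in an ARM template into it
New-AzureRmResourceGroup -Name "ExamPrepRG" -Location "EastUS"

New-AzureRmResourceGroupDeployment -ResourceGroupName "ExamPrepRG" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```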

Training Courses

Pluralsight courses proved to be a critical component of my training.  The site offers a number of courses covering topics such as Azure infrastructure solutions, storage, networking, application services, ARM templates, identity management and more.  There is also a learning path for exam 70-533 that consists of about 7 or 8 courses.  The training material is excellent and includes demos and exercise files that provide practical training.  Pluralsight courses will give you a solid foundation.  A monthly Pluralsight subscription costs $29, and the site is more than worth the price.  Another site that was helpful is Cloud Ranger.  Its courses are free, but many of them are now outdated since they are designed around the old Classic model.

Practice Exam

I would advise you to get the official MeasureUp practice exams from Mindhub.  Some of the questions cover the Classic model (the real exam is all ARM, with nothing on the Classic model), but the practice exam was still very helpful.  It provides the option of taking the test in Practice mode, which is a customizable format.  For instance, you can select questions from a particular objective, or only questions that you missed during the last practice run.  A huge benefit of the practice test is that it explains why an answer is correct and the others are wrong.  Each answer also has links to documents relevant to the question.  DO NOT memorize the answers; know why an answer is correct.  I retook the full practice exam (nearly 200 questions) until I consistently scored at least 95%.  At that point, I moved on to taking the practice test in Exam mode.  Mindhub currently has a special that offers an exam voucher, the practice test and 2 retakes for $266.00.

Helpful Links

The Exam 70-533 reference book has not been updated in a while, but this site has tips extracted from the book’s content.  These bullet points are important facts you will need to remember for the exam.  Also, make sure you know the features and pricing of App Service plans and SQL Database service tiers.

I hope the information I provided was beneficial and will contribute towards you passing exam 70-533.  Good luck!








Powershell Function to Get Messages

If you are an Exchange Server administrator, you more than likely spend a fair amount of time searching the message tracking logs.  The data in these logs can help you find, for example, all messages with a particular subject or sent by a certain user during a specific time frame.  There are two ways to search the message tracking logs:  the Exchange Admin Center (EAC) or the Exchange Management Shell (EMS).  Using the GUI is perfectly fine, if that is your preference.  However, if you have to perform searches on a regular basis, the EMS is the more efficient option.
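For example, a typical EMS search might look like this (the sender address and dates are made up):

```powershell
# Find messages from a given sender within a time window
Get-MessageTrackingLog -Sender "user@contoso.com" `
    -Start "10/01/2018 08:00" -End "10/02/2018 17:00" `
    -ResultSize Unlimited |
    Select-Object Timestamp, EventId, Recipients, MessageSubject
```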

To help make your job and mine a lot easier, I wrote a Powershell function that can be used to find messages based on various criteria.  Check it out on Github, and let me know what you think.




No Reminders

We have very busy work and personal lives, which can easily involve several meetings in the span of one week.  Outlook reminders serve an important role in helping to manage scheduled appointments and meetings.  After all, it’s not sufficient to simply have a slot on the calendar for that demo scheduled next week with a software vendor.  This is where reminders come to the rescue of human memory shortcomings.  A reminder will pop up and say, “Your demo session with Vendor X starts in 15 minutes.”  That’s not what it actually says, but you get the point.

Recently, I received a request from someone to have reminders disabled for all calendar items.  Personally, I couldn’t survive without reminders for upcoming events, but to each her own.  This is a very simple request to fulfill, right?  Just go into Outlook, click File > Options > Calendar Options and uncheck Default Reminders.  After performing these steps, calendar reminders should be disabled…at least that’s what I thought.  After making this change, reminders were still enabled by default.

To determine why reminders were still enabled on meeting invites, we have to turn our attention to the mailbox settings in Exchange.  Specifically, we must examine the properties of the mailbox in the Exchange Management Shell (EMS) to get the answer.  Since the EMS is technically Powershell, just customized with a different look and loaded with Exchange cmdlets, we can use it to get the properties and methods of any object.  Naturally, one would think that a mailbox object would contain a property for calendar reminder settings.  Let’s see.  In the EMS, I will run the following command to see the properties of my mailbox:

Get-Mailbox -Identity Burrs | Get-Member

The output of this command consists of a long list of properties; however, the only calendar-related items are:

Hmmm…not exactly what we are looking for.

If the reminder settings are not exposed as a property of the Get-Mailbox cmdlet, we must determine where that setting lives.  Let’s search the EMS for any cmdlets related to calendar configuration.  The following query will reveal any cmdlets that contain the noun “calendar”:

Get-Command *calendar*

The output of this query shows something interesting:  a command called “Get-MailboxCalendarConfiguration”.  Let’s take a look at the properties of this command to see what we are able to configure.  To do this, we must include the Identity parameter in the command syntax and pipe the output to “Get-Member”.  I will use my mailbox name as the identity.

Get-MailboxCalendarConfiguration -Identity Burrs | Get-Member

Bingo!  One of the properties available on “Get-MailboxCalendarConfiguration” is called “RemindersEnabled”.  Also, notice that the property’s definition includes “{get;set;}”.  This means we can apply a value to the RemindersEnabled property using the corresponding “Set” cmdlet.

Now we have something to work with.  If we take a look at the calendar configuration settings of my mailbox, here’s what we see:

Get-MailboxCalendarConfiguration -Identity Burrs | Format-List

It appears that reminders are enabled on my mailbox, with the DefaultReminderTime set to 15 minutes.  Here’s how we disable reminders:

Set-MailboxCalendarConfiguration -Identity Burrs -RemindersEnabled:$false -DefaultReminderTime 00:00:00

After executing the above command, reminders are now disabled for my mailbox.
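To verify, re-running the earlier query should now show RemindersEnabled set to False:

```powershell
# Confirm the reminder settings after the change
Get-MailboxCalendarConfiguration -Identity Burrs |
    Select-Object RemindersEnabled, DefaultReminderTime
```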