Archive

Archive for May, 2010

Winmail.dat received with a Lotus Notes Client

When an Exchange-based email organization sends email to a non-Exchange organization (Lotus Notes or other systems, for example webmail clients), a non-MAPI client may receive an empty email with a winmail.dat file attachment.

The client cannot open the winmail.dat file, so the email is useless to the recipient.

The reason is that Microsoft's internal TNEF format, also known as RTF (Rich Text Format), was sent out of the Exchange organization. Several places and settings influence this behavior, and you may need to check each of them:


1. Domain level (Exchange server settings):

 

a) Exchange 2003

In the ESM (Exchange System Manager), go to Global Settings, Internet Message Format, and select the domain. The default domain is “*”, but please configure an extra entry for the external SMTP domain, for example dominodomain.com.

If you open the properties of the domain and go to Advanced, there is a setting “Exchange rich-text format”. You can set it to “Always use”, “Never use”, or “Determined by individual user settings”.

Please set it to “Never use”.

 

 

b) For Exchange 2007:

Powershell:

Set-RemoteDomain &lt;DomainName&gt; -TNEFEnabled $false

In the GUI:

Open the Exchange Management Console: Organization Configuration, Hub Transport, Remote Domains, then select the domain name.

Please set it to “Never use”.
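The same change can be scripted end to end in the Exchange Management Shell. A minimal sketch, assuming a remote domain entry named “DominoDomain” for dominodomain.com (the names are placeholders; adjust them to your environment):

```powershell
# Create a remote domain entry for the external SMTP domain (skip if one already exists)
New-RemoteDomain -Name "DominoDomain" -DomainName "dominodomain.com"

# Never send TNEF (winmail.dat) to this domain
Set-RemoteDomain "DominoDomain" -TNEFEnabled $false

# Verify the setting
Get-RemoteDomain "DominoDomain" | Format-List Name,DomainName,TNEFEnabled
```

With TNEFEnabled set to $false, Exchange converts outgoing rich-text messages for that domain instead of attaching winmail.dat.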

 

   

2. Recipient level

 

Another place where you can configure RTF is at the recipient level.  

a) AD contact

If you have an Active Directory-based contact, you can set it to be a non-MAPI recipient:

 

 

Please uncheck the “Use MAPI rich text format” box.

Note: For an AD contact there is one additional setting which may need to be set:

InternetEncoding: 1310720

See KB 924240 for more details.

KB 924240        The email message is attached as winmail.dat when you send mail from Exchange to Lotus Notes through an SMTP connector

http://support.microsoft.com/default.aspx?scid=kb;EN-US;924240 
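Setting InternetEncoding can also be scripted. A hedged sketch using ADSI from PowerShell; the contact's distinguished name below is a placeholder you must replace with your contact's actual DN:

```powershell
# Bind to the AD contact (hypothetical DN - replace with your contact's real DN)
$contact = [ADSI]"LDAP://CN=John Doe,OU=Contacts,DC=example,DC=com"

# 1310720 = send as MIME with plain text, per KB 924240
$contact.Put("internetEncoding", 1310720)
$contact.SetInfo()
```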

b) For Exchange 2007:

Open the Exchange Management Console, Recipient Configuration, Mail Contact:

 

 

c) The same is true for a personal contact created in Outlook:

 

  

d) Even for a one-off address this can be set. If you enter the SMTP address in the To line, for example john.doe@domain.com, then do a Check Names, double-click the email address in the To line, and you will see the same box as before:

 

 

3. Other important Outlook settings:

 

a) Message Format:

Tools, Options, Mail Format, Message Format (Compose in this message format):

Plain Text, Rich Text, HTML

 

 

b) Internet Format:

Tools, Options, Mail Format, Internet Format, Outlook Rich Text options:

When sending Outlook Rich Text messages to Internet recipients, use this format:

       Convert to Plain Text format

       Convert to HTML format

–    Convert to Rich Text format

 

 

4. Outlook overwriting the Exchange settings:

 

329454 Outlook 2002 message format overrides Exchange server message format

http://support.microsoft.com/default.aspx?scid=kb;EN-US;329454

HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\Outlook\Options\Mail
DWORD: DisableInetOverride
Value: 1

When the registry key is set to a non-zero number, the message format does not override the Internet Message Format that is specified by Exchange.

The key DisableInetOverride still exists in Outlook 2003, 2007 and 2010.
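A sketch of setting this value from PowerShell; the `11.0` path segment is for Outlook 2003, so substitute 12.0 (Outlook 2007) or 14.0 (Outlook 2010) as appropriate:

```powershell
# Registry path for Outlook 2003; adjust the version segment for 2007 (12.0) or 2010 (14.0)
$path = "HKCU:\Software\Microsoft\Office\11.0\Outlook\Options\Mail"

# Create the key if it does not exist yet
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }

# Set DisableInetOverride = 1 so Outlook does not override the Exchange Internet Message Format
New-ItemProperty -Path $path -Name "DisableInetOverride" -PropertyType DWord -Value 1 -Force | Out-Null
```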

 

5. NK2 cache

 

If you have Outlook clients where it sometimes works and sometimes does not, please delete the name cache files from the Outlook clients (delete the username.NK2 file(s)).
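The NK2 files normally live in the user's roaming profile. A sketch of clearing them with PowerShell (close Outlook first; the path assumes the Outlook 2003/2007 default location):

```powershell
# Default NK2 nickname-cache location for Outlook 2003/2007
$nk2Path = Join-Path $env:APPDATA "Microsoft\Outlook"

# Remove all nickname cache files; Outlook rebuilds the cache as addresses are used again
Get-ChildItem -Path $nk2Path -Filter "*.NK2" | Remove-Item -Force
```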

 

6. More to read:

 

821750 How to configure Internet e-mail message formats at the user and the domain levels in Exchange Server 2003

http://support.microsoft.com/default.aspx?scid=kb;EN-US;821750

290809 How e-mail message formats affect Internet e-mail messages in Outlook

http://support.microsoft.com/default.aspx?scid=kb;EN-US;290809

 

The May 2010 Security Runtime Engine Preview is now available on CodePlex

May 27th, 2010 No comments

The WPL site on CodePlex now has the May CTP code only release for the Web Protection Library and a Word document introducing the new extensibility points for the Security Runtime Engine.

We haven’t released binaries because it’s just a preview; it is in no way ready for production.

So why make the source available? We want feedback. This represents a rewrite of the Security Runtime and a new way for you to easily write plug-ins for it. Rather than simply decide what’s best for our users, we wanted to show you the direction we’re taking and give you a chance to influence it.

Missing from the source are any tests – they do exist and will be published in the next source drop. There are no inspectors or logging plug-ins, because we would like you to work through the tutorial, then look through the code, think about how you would use them, and try to write your own. If we supplied some example inspectors, the temptation would be there to use those as a starting template for your own.

So please download, read the source, play around (on a test web site) and do leave feedback and bug reports/issues for us on CodePlex.

Barry

Categories: Uncategorized Tags:


App-V 4.5 SP2 now available

May 25th, 2010 No comments

Microsoft Application Virtualization (App-V) with Windows 7, Windows Server, and Office 2010 delivers a seamless user experience, streamlined application deployment and simplified application management.  App-V helps transform applications into centrally managed virtual services to reduce the cost of application deployment, eliminate application conflicts and reboots, simplify your base image footprint to expedite PC provisioning, and increase user productivity.

App-V 4.5 Service Pack 2 provides the latest updates to the Microsoft Application Virtualization 4.5 code line. This is the first time the team has delivered App-V via Windows Update. App-V 4.5 SP2 introduces:

  • Enhanced failover protection and disaster recovery for your virtual application infrastructure: App-V data-store failover protection enables administrators to quickly recover from disasters and/or recycle servers for maintenance.
  • Highly available application infrastructure: App-V 4.5 SP2 load-balanced management servers can now leverage a SQL Server mirrored data store to support high-availability scenarios for line-of-business applications, with automatic failover protection not available in previous versions of App-V.
  • Data replication across geographies: this enables organizations to recover from site-wide failures faster.
  • App-V 4.5 SP2 clients can now deploy Office 2010.

To learn more about this release please view the App-V 4.5 SP2 release notes and FAQ.

App-V 4.5 SP2 can be deployed immediately to production and is available via Microsoft Update to MDOP customers and through the Microsoft Volume Licensing Site (MVLS).  If you want to evaluate MDOP, the software is available on MSDN and TechNet.

To learn more about App-V you can visit the MDOP homepage and App-V TechCenter.

“Vail” Launchpad and Its Extensibility

May 24th, 2010 No comments

Hi there! We are on to our second edition of Vail engineering blogs, and this time we are talking about Launchpad – what it is, how it can be extended, and why developers should care about it. I also want to point out that we got a pretty good response to the Vail SDK, with some of our MVPs covering it pretty well by now. You can check out some of these very informative posts here:

http://asoftblog.wordpress.com/2010/05/05/developing-an-add-in-for-vail/

http://blog.tentaclesoftware.com/archive/2010/05/07/89.aspx

http://blog.tentaclesoftware.com/archive/2010/05/08/90.aspx

What is Launchpad?

Launchpad is a lightweight and extensible client-based user interface that we built for Vail. It was born out of a couple of pain points that our customers experienced with Home Server v1. While Home Server v1 let developers add what we call ‘administrative’ or ‘server management’ tasks to the Admin Console, it did not provide any means by which a day-to-day or non-administrative task could be presented to users in a coherent manner that conveys its association with Home Server. As a result, we started seeing add-ins for day-to-day consumption of home server capabilities that were deployed to the Admin Console but did not belong there, since they were not administrative tasks. We realized there was a need for a coherent and consistent grouping, as well as an entry point, for home server related tasks that everyone in the household can perform from their client PCs. This was the first pain point.

The second one, and perhaps the more significant of the two, was the limitation around having matching usernames and passwords on the server and the PCs. If you recall, Home Server v1 required users to create user accounts on the server with the same username and password as on the client PCs so that they could seamlessly access the shared folders on the server as soon as they logged in to their PCs. This generated a lot of confusion among consumers, as was evident from the feedback we got. With Vail, Launchpad acts as the login UI for signing the user onto the server, thereby granting them access to the server shares and other platform services exposed via the SDK. We no longer require user accounts to match on server and client; instead, users can use Launchpad to ‘sign in’ to the server with any user account and password combination that was set up in Dashboard!

    

In short, Launchpad serves the following purposes:

  1. It is the entry point for day-to-day tasks related to Windows Home Server from the client PCs.
  2. It eliminates the need for matching usernames and passwords set up between server and client, and eliminates the password sync dialogs.
  3. It provides a logical and centralized location where all home server related tasks are exposed, resulting in much better awareness of home server and its capabilities.
  4. It gives everyone in the household visibility into developers’ add-ins, rather than just home server administrators.

Why should developers care about Launchpad?

So far, home server add-ins or applications have focused on ‘administrative’ tasks that extended the Admin Console. The audience for such add-ins was limited to one person in the household, most likely the head of the household who does the ‘administrative tasks’ on the computers. With Launchpad, we now have the ability to create end-to-end add-ins with user interfaces targeted at everyone in the home who uses a PC joined to the home server. A typical example is an add-in that lets everyone in the home sync a folder on their PC to the home server, and then subsequently to the cloud. The launch point for a configuration UI for adding or removing folders included in the auto-sync scenario above (which is specific to the user’s PC) would be Launchpad, not Dashboard.

As you can see from the example, this is an opportunity for developers to create add-ins with multiple facets – one server-side component targeting the administrator and one client-side component targeting everyone in the home. The result is more people using your add-ins and more word spreading about your product/add-in. With our add-in deployment mechanism, you can package both components together, and we’ll take care of deploying and installing the relevant pieces on the server and client appropriately (more on this in a later post). So, as you can see, we have built a powerful SDK for developers to build a truly end-to-end add-in spanning the client, the server, and the cloud.

 When to extend Launchpad and when not to

Just so that we give a clear guidance on extensibility of Launchpad vs Dashboard, I am going to call this out specifically here.

You extend Launchpad when…

  1. You have a task or resource/UI that you expect everyone in the household to access/use. E.g., back up my PC, access shared folders, etc.
  2. The task IS NOT related to the management or administration of the Server.
  3. You DO NOT need Administrator privileges on the server to do the task.

You extend Dashboard when…

  1. You have a task or resource/UI that you expect only the head of the household (home admin – typically the person who sets up Home Server) to access/use. E.g., add a hard drive, create a user account, etc.
  2. The task IS related to the management or administration of the Server, and not a day-to-day one.
  3. You DO need Administrator privileges on the server to do the task.

When in doubt, please do not hesitate to reach out to us.

Extending Launchpad

 Adding entries to Launchpad

You can add entries to Launchpad that point to a client application that makes use of Home Server in one way or another. Your entries will appear under a category called ‘Addins’ on the main page of Launchpad.

Adding categories to Launchpad

If you want to add multiple entries to the Launchpad UI, we recommend grouping them under categories. Categories can be added up to three levels deep.

Example:

Addins-> (Built-in category)

              Company-> (Your category)

                       Antivirus -> (Your sub category)

                                  System Scan (entry)

                                  Scan Schedule (entry)

                       Online Backup-> (Your sub category)

                                  Backup Now (entry)

                                  Backup Settings (entry)

 

Enhancements coming in future builds

In later builds, we are looking at adding the capability to target Launchpad tasks at specific users who are part of a user group on the server. For example, you can target only users who are part of the ‘Remote Access Group’ to see a link to your remote portal hosted on Home Server. We are also making it so that Launchpad automatically authenticates the machine to the home server using the stored username and password, if the user chooses to do so. So, as soon as the user logs into the local machine, they are authenticated to Home Server and all the services that require authentication to the server work seamlessly. Another enhancement that is coming is the ability to control the alerts shown from the tray icon. Users will be able to choose from three options – no alerts, network alerts, or local & network alerts. On top of that, you’ll see a lot more in the look and feel of Launchpad when we ship!

That’s it for today. As always, we welcome your feedback, comments and suggestions!

XPath to generate a list of NTLM authentications on Windows Vista or Later

May 13th, 2010 No comments

Hi Everyone,


Sas sent me an email complaining that I am not posting as often as I should- sorry about that.  I am working on a different project now but I am still in close touch with the auditing team and I’ll try to do better.


Anyway a question that I hear regularly is, “how do I find all the NTLM authentications on my network”?


Other than running a network trace, the best way I have found (OK, invented 🙂) to do this is to look at the logon events in the audit log.


One of the changes we made to the logon events in Windows Vista (and therefore subsequent releases of Windows) was to include the NTLM protocol level in the logon events, if the NTLM auth package was used.


Now, with the new EventLog ecosystem, it’s easy to generate some XPath to find just these events.


Here’s the query:

*[System
   [Provider
     [@Name='Microsoft-Windows-Security-Auditing']
       and Task = 12544
       and (band(Keywords,9007199254740992))
       and (EventID=4624)
   ]
   and
   EventData
     [Data
       [@Name='LmPackageName'] != '-'
     ]
 ]


 


To use this in Event Viewer:



  1. Find the Security log under Windows Logs in the tree pane.

  2. Right-click the Security log, and choose “Filter Current Log…”

  3. Select the “XML” tab.

  4. Check the “Edit query manually” box.

  5. Replace the default query (“*”, or everything in the “<Select>” element), with the text in the box above.  I’ve formatted it for readability.

  6. Click OK

The event view will now be filtered and you’ll only see NTLM logon events.  Additionally, each filtered event will contain a “Detailed Authentication Information” section containing the protocol level (e.g. LM, NTLM, NTLM V2) in the “Package Name” field, and the session key length, if one was negotiated.







Detailed Authentication Information:
            Logon Process: NtLmSsp
            Authentication Package: NTLM
            Transited Services: –
            Package Name (NTLM only): NTLM V2
            Key Length: 128
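If you prefer a script over Event Viewer, the same XPath can be fed to Get-WinEvent (available on Windows Vista and later; run an elevated shell to read the Security log). A sketch:

```powershell
# Same filter as the Event Viewer query, wrapped in a structured-query XML document
$query = @"
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[Provider[@Name='Microsoft-Windows-Security-Auditing']
        and Task=12544 and (band(Keywords,9007199254740992)) and (EventID=4624)]
        and EventData[Data[@Name='LmPackageName'] != '-']]
    </Select>
  </Query>
</QueryList>
"@

# List the NTLM logon events
Get-WinEvent -FilterXml ([xml]$query) | Select-Object TimeCreated, Id, Message
```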


 

Categories: Descriptions, Tips, Tools Tags:


Windows Management with MDOP

May 13th, 2010 No comments

Previously in this blog, I’ve described how Microsoft® Application Virtualization (App-V) and Microsoft Enterprise Desktop Virtualization (MED-V) can not only help streamline the deployment of the Windows® 7 operating system but also help simplify the maintenance of the desktop environment after deployment. These are definitely big products, and they offer a huge potential to save you time and money. They are also the products that first pop into many people’s minds when they think about the Microsoft Desktop Optimization Pack (MDOP).


MDOP is more than just App-V and MED-V, however. Advanced Group Policy Management (AGPM) and the Diagnostics and Recovery Toolset (DaRT)—also part of MDOP—are no slouches. In fact, considering how little time and effort it takes to deploy both of these products, and how easy they are to use, they offer a pretty big bang for the buck. Put another way: Their return on investment is huge.


Advanced Group Policy Management


In terms of the Windows 7 deployment lifecycle, AGPM fits neatly into the maintenance phase—or Operate phase, in Microsoft Operations Framework parlance—of the deployment project. Most likely, you’ll be working with Group Policy after deploying Windows 7. Why not use the opportunity to take control of your organization’s GPOs by using AGPM?


All IT pros are aware of Group Policy, but if you’re moving from Windows XP to Windows 7, you might not know how far along it’s come and how great a tool it can be for managing your environment. By using Group Policy, you can define settings for Windows to enforce. For example, you can configure and deploy power-management settings to the computers in your organization, preventing users from changing those settings. Of course, most IT pros think of security settings when they think of Group Policy, and Group Policy certainly gives you a lot of flexibility and control of those settings, too.


Group Policy isn’t just a terrific way to enforce configurations, though. Because it enables you to configure user and computer settings automatically, it’s also a great way to get closer to the dream of replaceable PCs. Group Policy preferences bring you even closer to that dream, letting you manage settings, files, printers, and much more. You can even choose whether to enforce those settings or allow users to change them after you’ve configured them (hence the name preferences).


On its own, Group Policy is an excellent infrastructure for managing your environment, but Group Policy doesn’t provide many features for managing itself. It doesn’t provide a role-based workflow. That is, Group Policy doesn’t have a formal, built-in edit, review, approval, and deployment process.


AGPM adds the missing role-based delegation to Group Policy. You can delegate reviewer, editor, and approver roles per domain or per GPO. Additionally, AGPM gives you a workflow to manage the creation, editing, and deployment of GPOs in production. You can even edit and test GPOs offline, in a test lab, then easily move those GPOs into production and deploy them. Of course, AGPM provides version control for GPOs. Not only does version control let you audit changes, it also lets you quickly roll back changes that fail in production.


Diagnostics and Recovery Toolset


DaRT fits as well in the deployment phase as it does in the maintenance phase of a Windows 7 deployment project. Throughout the development of Windows 7, Microsoft focused closely on the fundamentals. As a result, Windows 7 is a very stable and reliable operating system, but even the most stable operating systems have issues from time to time. During deployment, you can use DaRT to troubleshoot computers that won’t start. After deployment, you can use DaRT for additional troubleshooting, as necessary.


DaRT is very easy to set up. It doesn’t even leave a footprint on your infrastructure. You install DaRT on your desktop computer, create boot media, then use that boot media to start computers that you’re troubleshooting. For example, if a computer fails to start because of a faulty device driver, you can start the computer with DaRT (leaving the installed Windows operating system offline), use the Crash Analyzer tool to find the faulty device driver, and use Computer Management to disable the device driver. Then, you can start the installed Windows operating system on the computer.


And troubleshooting computers that fail to start isn’t DaRT’s only capability. DaRT includes a number of tools that are useful when you want to work offline. For example, you can use DaRT to scan a computer for malware, recover deleted files, or disable unwanted services. Suppose a user has forgotten the password for a local account. You can use DaRT to reset that password.


Getting started with both AGPM and DaRT is simple. In fact, I encourage you to give both a try in a test environment. You can easily evaluate both products by using virtual machines. Existing MDOP customers can download AGPM and DaRT as part of MDOP at the Volume Licensing Service Center (VLSC), MSDN®, and TechNet. You could be up and running with each in under an hour.

Categories: Uncategorized Tags:

XPath to generate a list of NTLM authentications on Windows Vista or Later

May 13th, 2010 No comments

Hi Everyone,


Sas sent me an email complaining that I am not posting as often as I should- sorry about that.  I am working on a different project now but I am still in close touch with the auditing team and I’ll try to do better.


Anyway a question that I hear regularly is, “how do I find all the NTLM authentications on my network”?


Other than running a network trace, the best way I have found (ok invented :-)  to do this is to look at the logon events in the audit log.


One of the changes we made to the logon events in Windows Vista (and therefore subsequent releases of Windows) was to include the NTLM protocol level in the logon events, if the NTLM auth package was used.


Now, with the new EventLog ecosystem, it’s easy to generate some XPath to find just these events.


Here’s the query:







*[System


   [Provider


     [@Name=’Microsoft-Windows-Security-Auditing’]


       and Task = 12544


       and (band(Keywords,9007199254740992))


       and (EventID=4624)


   ]


   and


   EventData


     [Data


       [@Name=’LmPackageName’] != ‘-‘


     ]



 ]


 


To use this in Event Viewer:



  1. Find the Security log under Windows Logs in the tree pane.

  2. Right-click the Security log, and choose “Filter Current Log…”

  3. Select the “XML” tab.

  4. Check the “Edit query manually” box.

  5. Replace the default query (“*”, or everything in the “<Select>” element), with the text in the box above.  I’ve formatted it for readability.

  6. Click OK

The event view will now be filtered and you’ll only see NTLM logon events.  Additionally, each filtered event will contain a “Detailed Authentication Information” section containing the protocol level (e.g. LM, NTLM, NTLM V2) in the “Package Name” field, and the session key length, if one was negotiated.







Detailed Authentication Information:
            Logon Process: NtLmSsp
            Authentication Package: NTLM
            Transited Services: -
            Package Name (NTLM only): NTLM V2
            Key Length: 128
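If you save the filtered events as XML, you can also post-process them outside Event Viewer. Here’s a minimal sketch (not part of the original post; Python is used purely for illustration, and it assumes the standard event XML namespace and that the exported events sit under one wrapper element):

```python
import xml.etree.ElementTree as ET

# Standard namespace used by Windows event XML.
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def ntlm_package_names(xml_text):
    """Return the LmPackageName (protocol level) of each 4624 event that used NTLM."""
    root = ET.fromstring(xml_text)
    # Assumes exported events are wrapped in one root element;
    # a lone <Event> document is handled as well.
    events = root.findall("e:Event", NS) or [root]
    names = []
    for ev in events:
        if ev.findtext("e:System/e:EventID", default="", namespaces=NS) != "4624":
            continue
        for data in ev.findall("e:EventData/e:Data", NS):
            if data.get("Name") == "LmPackageName" and data.text not in (None, "-"):
                names.append(data.text)
    return names
```

This mirrors the XPath filter above: keep 4624 events whose LmPackageName field is not “-”.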


 

Categories: Descriptions, Tips, Tools Tags:

Certificate Path Validation in Bridge CA and Cross-Certification Environments

May 13th, 2010 Comments off

Recently, we’ve had a deluge of questions regarding chain building and selection, especially in the presence of cross-certified certificates. Hopefully, this post will make Crypto API 2 (CAPI2) chaining logic clearer and help enterprise admins design and troubleshoot their public key infrastructure….(read more)

Powershell CRL Copy

May 12th, 2010 Comments off

This script writes a Certification Authority’s Certificate Revocation List to HTTP based CRL Distribution Points via a UNC path. It checks to make sure that the copy was successful and that the CDPs have not and are not about to expire. Alerts/status messages are sent via SMTP and eventlog entries.

Performs the following steps:

  1. Determines whether Active Directory Certificate Services is running on the system. In the case of a cluster, make sure to set the $Cluster variable to ‘$TRUE’
  2. Reads the CA’s CRL from %windir%\system32\certsrv\certenroll (defined by the $crl_master_path + $crl_name variables). I’ll refer to this CRL as the "Master CRL."
  3. Checks the NextUpdate value of the Master CRL to make sure it has not expired. (Note that the Mono library adds hours to the NextUpdate and ThisUpdate values; control this time difference with the $creep variable)
  4. Copies the Master CRL to the CDP UNC locations if the Master CRL’s ThisUpdate is greater than the CDP CRLs’ ThisUpdate
  5. Compares the hash values of the CRLs to make sure the copy was successful. If they do not match, the script overrides the $SMTP variable to send an email alert message.
  6. Warns when the Master CRL is approaching end of life (its ThisUpdate is between NextCRLPublish and NextUpdate). Use the $threshold variable to define (in hours; .5 = 30 minutes) how far ahead of NextUpdate you want to receive warnings that the CRLs are about to expire.
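The copy-and-verify logic in steps 4 and 5 boils down to comparing file hashes after the copy. A rough sketch of that check (shown here in Python for illustration only; the script itself uses the PSCX Get-Hash cmdlet):

```python
import hashlib

def file_hash(path):
    """Hash a file in chunks so a large CRL is never read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(master_path, cdp_paths):
    """Return the CDP paths whose content does not match the master CRL.

    An empty list means every copy succeeded; a non-empty list is the
    condition under which the script forces an SMTP alert.
    """
    master = file_hash(master_path)
    return [p for p in cdp_paths if file_hash(p) != master]
```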

Output:

  1. Run the script initially as local administrator to register with the system’s application eventlog
  2. Sends an SMTP message if $SMTP = True. Set the SMTP settings for your environment in the variable section
  3. To run this script with debug output, set powershell $DebugPreference = "Continue"
  4. The ‘results’ function is used to write to the eventlog and send SMTP messages. Event levels are controlled in the variable section. For example, for a failed CRL copy you want to make sure the eventlog shows "Error" ($EventHigh)

Requirements:

  1. Windows Powershell v2 included in the Windows Management Framework http://support.microsoft.com/kb/968929
  2. Powershell Community Extensions for the Get-Hash cmdlet http://pscx.codeplex.com
  3. This powershell script uses a third party, open source .Net reference called ‘Mono.’ More information can be found at http://www.mono-project.com/Main_Page (Note: the Mono assembly Mono.Security.x509.x509CRL adds 4 hours to the .NextUpdate, .ThisUpdate and .IsCurrent function)
  4. Don’t forget to set the powershell execution policy (Set-ExecutionPolicy)
  5. Run the script as a scheduled task every 30 minutes
    1. Start Task Scheduler
    2. Select Create Basic Task
    3. Select Daily
    4. Set Repeat task every: 30 minutes for the duration of: Indefinitely
    5. Select Start a program
    6. Set Program/script = %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe
    7. Set Argument = <full_path>crl_copy.ps1
    8. Within the Task’s Properties -> General tab select the following Security Options
      1. Select Change User or Group
      2. Add service account used to perform backups, make sure it has sufficient rights (remember to run the script as local administrator the first time to register with the eventlog)
      3. Select Run whether user is logged on or not
      4. Select Run with highest privileges

ToDos:

If you have any ideas, please share them. A couple of thoughts about future improvements:

  1. Bind to an LDAP directory to retrieve CRL (e.g. ldap://fpkia.gsa.gov/CommonPolicy/CommonPolicy(1).crl)
  2. Use multidimensional arrays to store CDP HTTP and UNC addresses
  3. Improve error handling

Variables:

Make sure to set for your environment

$crl_master_path

Location where the CA writes the Master CRL

$CRL_Name

Name of the CRL

$CDP1_UNC

UNC path to CDP1 make sure the path ends with “\”

$CDP2_UNC

UNC path to CDP2 make sure the path ends with “\”

$CDP1_HTTP

HTTP path to CDP1 make sure the path ends with “/”

$CDP2_HTTP

HTTP path to CDP2 make sure the path ends with “/”

$SMTP

Boolean value to determine if SMTP message is sent via the ‘results’ function

$SmtpServer

Hostname (name, FQDN or IP) of SMTP server

$From

From address of SMTP based message

$To

Recipients of SMTP message

$Title

Subject of SMTP message

$CRL_Evt_Source

Source field of Application eventlog entry

$threshold

# of hours before reaching the Master CRL’s NextUpdate time (early warning)

$creep

# of hours that the Mono library adds to NextUpdate and ThisUpdate values

$Cluster

Set to ‘true’ if your Certification Authority is clustered
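To make the interplay of $threshold and $creep concrete: the expiry-warning check reduces to a window test against the creep-corrected NextUpdate. A sketch (Python purely for illustration; parameter names mirror the script’s variables):

```python
from datetime import timedelta

def crl_warning_due(next_update, now, threshold=1.0, creep=-4.0):
    """True when `now` is inside the warning window: within `threshold` hours
    of the creep-corrected NextUpdate, but before the CRL actually expires."""
    corrected = next_update + timedelta(hours=creep)  # undo the Mono time offset
    return corrected - timedelta(hours=threshold) < now <= corrected
```

With the defaults, a CRL whose Mono-reported NextUpdate is 12:00 is treated as expiring at 08:00, and warnings start at 07:00.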

 


################################################
#
# Title:     CRLCopy.ps1
# Date:     4/28/2010
# Author: Paul Fox (MCS)
# Copyright Microsoft Corporation @2010
#
# Description:     This script writes a Certification Authority’s Certificate Revocation List to HTTP based CRL Distribution Points via a UNC path.
#               Performs the following steps:
#                 1) Determines whether Active Directory Certificate Services is running on the system. In the case of a cluster make sure to set the $Cluster variable to ‘$TRUE’
#                 2) Reads the CA’s CRL from %windir%\system32\certsrv\certenroll (defined by the $crl_master_path + $crl_name variables). I’ll refer to this CRL as the "Master CRL."
#                 3) Checks the NextUpdate value of the Master CRL to make sure it has not expired. (Note that the Mono library adds hours to the NextUpdate and EffectiveDate values; control this time difference with the $creep variable)
#                 4) Copies the Master CRL to the CDP UNC locations if the Master CRL’s ThisUpdate is greater than the CDP CRLs’ ThisUpdate
#                 5) Compares the hash values of the CRLs to make sure the copy was successful. If they do not match, the script overrides the $SMTP variable to send an email alert message.
#                 6) Warns when the Master CRL is approaching end of life (its ThisUpdate is between NextCRLPublish and NextUpdate). Use the $threshold variable to define (in hours) how far ahead of
#                    NextUpdate you want to receive warnings that the CRLs are soon to expire.
#
# Output: 1) Run the script initially as local administrator to register with the system’s application eventlog
#         2) Sends an SMTP message if $SMTP = True. Set the SMTP settings for your environment in the variable section
#         3) To run this script with debug output, set powershell $DebugPreference = "Continue"
#         4) The ‘results’ function is used to write to the eventlog and send SMTP messages. Event levels are controlled in the variable section. For example, for a failed CRL copy you want to make sure the eventlog shows "Error" ($EventHigh)
#
# Requirements: 1) Windows Powershell v2 included in the Windows Management Framework http://support.microsoft.com/kb/968929
#               2) Powershell Community Extensions for the Get-Hash cmdlet http://pscx.codeplex.com
#               3) This powershell script uses a third party, open source .Net reference called ‘Mono’. More information can be found at http://www.mono-project.com/Main_Page
#                  Note: the Mono assembly Mono.Security.x509.x509CRL adds 4 hours to the .NextUpdate, .ThisUpdate and .IsCurrent function
#               4) Don’t forget to set the powershell execution policy (Set-ExecutionPolicy)
#
# ToDos: Bind to an LDAP directory to retrieve CRL (e.g. ldap://fpkia.gsa.gov/CommonPolicy/CommonPolicy(1).crl)
#        Use multidimensional arrays to store CDP HTTP and UNC addresses
#
# Debug: To run this script with debug output set powershell $DebugPreference = "Continue"
#
################################################

################################################
#
# Function:     Results
# Description:    Writes the $evt_string to the Application eventlog and sends
#                SMTP message to recipients if $SMTP = [bool]$true
#   
#
################################################
function results([string]$evt_string, [int]$level ,[bool]$sendsmtp)
{
write-debug "******** Inside results function ********"
write-debug "SMTP = $sendsmtp"
write-debug "Evtstring = $evt_string"
write-debug "Level: $level"
###############
#if eventlog does not exist create it (must run script as local administrator once to create)
###############
if(![system.diagnostics.eventlog]::sourceExists($CRL_Evt_Source))
    {
        $evtlog = [system.diagnostics.eventlog]::CreateEventSource($CRL_Evt_Source,"Application")
    }

###############
# set eventlog object
###############
$evtlog = new-object system.diagnostics.eventlog("application",".")
$evtlog.source = $CRL_Evt_Source

###############
# write to eventlog
###############
$evtlog.writeEntry($evt_string, $level, $EventID)

if($sendsmtp)
    {
    $SmtpClient = new-object system.net.mail.smtpClient
    $SmtpClient.host = $SmtpServer
    $Body = $evt_string
    $SmtpClient.Send($from,$to,$title,$Body)
    }
}

################################################
#
# Main program
#
################################################

################################################
#
# Add Mono .Net References
# If running on an x64 system make sure the path is correct
#
################################################
Add-Type -Path "C:\Program Files (x86)\Mono-2.6.4\lib\mono\2.0\Mono.Security.dll"

################################################
#
# Variables
#
################################################
$crl_master_path = "c:\windows\system32\certsrv\certenroll\"
$CRL_Name = "master.crl"
$CDP1_UNC = "\\cdp1\cdp1\"
$CDP2_UNC = "\\cdp2\cdp2\"
$CDP1_HTTP = "http://keys1.your.domain/"
$CDP2_HTTP = "http://keys2.your.domain/"

$SMTP = [bool]$false
$SmtpServer = "your.mx.mail.server"
$From = "crlcopy@your.domain"
$To = "CAAdmins@your.domain"
$Title = "CRL Copy Process Results"

$CRL_Evt_Source = "CRL Copy Process"
$EventID = "5000"
$EventHigh = "1"
$EventWarning = "2"
$EventInformation = "4"

$newline = [System.Environment]::NewLine
$time = Get-Date
$threshold = 1
$creep = -4
$Cluster =  [bool]$false

################################################
#
# Is certsrv running? Is it a clustered CA?
# If clustered and it is not running, don’t send an SMTP message
#
################################################
$service = get-service | where-Object {$_.name -eq "certsvc"}

if (!($service.Status -eq "Running"))
    {
    if($Cluster)
        {
       $evt_string = "Active Directory Certificate Services is not running on this node of the cluster. Exiting program."
       write-debug "ADCS is not running. This is a clustered node. Exiting"
       results $evt_string $EventInformation $SMTP
       exit
       }
    else
        {
        $evt_string = "**** IMPORTANT **** IMPORTANT **** IMPORTANT ****" +  $newline + "Certsvc status is: " + $service.status + $newline
        write-debug "ADCS is not running and not a clustered node. Not good."
        results $evt_string $EventHigh $SMTP
        exit
        }
      }
else
     {
     write-debug "Certsvc is running. Continue."
     }

################################################
#
# Pull CRLs from Master and HTTP CDP locations
# Not going to bother with Active Directory since this
# is probably a Windows Enterprise CA (todo)
#
################################################
$CRL_Master = [Mono.Security.X509.X509Crl]::CreateFromFile($crl_master_path + $CRL_Name)
$web_client = New-Object System.Net.WebClient
$CDP1_CRL = [Mono.Security.X509.X509Crl]$web_client.DownloadData($CDP1_HTTP + $CRL_Name)
$CDP2_CRL = [Mono.Security.X509.X509Crl]$web_client.DownloadData($CDP2_HTTP + $CRL_Name)

################################################
#
# Debug section to give you the time/dates of the CRLs
#
################################################
if($debugpreference -eq "continue")
    {
    write-debug $newline
    write-debug "Master CRL Values"
    $debug_out = $CRL_Master.ThisUpdate.AddHours($creep)
    write-debug "Master ThisUpdate $debug_out"
    $debug_out = $CDP1_CRL.ThisUpdate.AddHours($creep)
    write-debug "CDP1_CRL ThisUpdate: $debug_out"
    $debug_out = $CDP2_CRL.ThisUpdate.AddHours($creep)
    write-debug "CDP2_CRL ThisUpdate: $debug_out"
    $debug_out = $CRL_Master.NextUpdate.AddHours($creep)
    write-debug "Master NextUpdate: $debug_out"
    $debug_out = $CDP1_CRL.NextUpdate.AddHours($creep)
    write-debug "CDP1_CRL NextUpdate: $debug_out"
    $debug_out = $CDP2_CRL.NextUpdate.AddHours($creep)
    write-debug "CDP2_CRL NextUpdate: $debug_out"
    write-debug $newline
    }

################################################
#
# Determine the status of the master CRL
# Master and CDP CRLs have the same EffectiveDate (Mono = ThisUpdate)   
#
################################################
if($CRL_Master.NextUpdate.AddHours($creep) -gt $time)
        {
        # This is healthy Master CRL
        write-debug "Master CRL EffectiveDate: "
        write-debug $CRL_Master.ThisUpdate.AddHours($creep)
        write-debug "Time now is: "
        write-debug $time
        write-debug $newline
        }
else
        {
        # Everything has gone stale, not good. Alert.
        write-debug "Master CRL has gone stale"
        $evt_string = "**** IMPORTANT **** IMPORTANT **** IMPORTANT ****" + $newline + "Master CRL: " + $CRL_Name + " has an EffectiveDate of: " + $CRL_Master.ThisUpdate.AddHours($creep) + " and a NextUpdate of: " + $CRL_Master.NextUpdate.AddHours($creep) + $newline + "Certsvc status is: " + $service.status
        results $evt_string $EventHigh $SMTP
        exit
        }
################################################
#   
# Determine what the status of the CDPs
# Does the Master and the CDP CRLs match up?
#
################################################
if (($CRL_Master.ThisUpdate -eq $CDP1_CRL.ThisUpdate) -and ($CRL_Master.ThisUpdate -eq $CDP2_CRL.ThisUpdate))
    {
    write-debug "All CRLs EffectiveDates match"
    write-debug $CRL_Master.ThisUpdate
    write-debug $CDP1_CRL.ThisUpdate
    write-debug $CDP2_CRL.ThisUpdate
    write-debug $newline
    }
################################################
#
# New Master CRL: update the CDP CRLs if one or both are old
# would be nice to use the ‘CRL Number’
# Compare the hash values of the Master CRL and CDP CRLs
# after the copy command to make sure the copy completed
#
################################################
elseif (($CRL_Master.ThisUpdate -gt $CDP1_CRL.ThisUpdate) -or ($CRL_Master.ThisUpdate -gt $CDP2_CRL.ThisUpdate))
    {
    # There is a new master CRL, copy to CDPs
    write-debug "New master crl. Copy out to CDPs"
    $source = Get-Item $crl_master_path$CRL_Name
    Copy-Item $source $CDP1_UNC$CRL_Name
    Copy-Item $source $CDP2_UNC$CRL_Name
    # Compare the hash values of the master CRL to the CDP CRL
    # If they do not equal, alert via SMTP by setting the $SMTP Boolean value to ‘$true’
    $master_hash = get-hash $source
    $cdp1_hash = get-hash $CDP1_UNC$CRL_Name
    $cdp2_hash = get-hash $CDP2_UNC$CRL_Name
    if(($master_hash.HashString -ne $cdp1_hash.HashString) -or ($master_hash.HashString -ne $cdp2_hash.HashString))
        {
        $evt_string = "CRL copy to CDP location failed:" +$newline +"Master CRL Hash: " +$master_hash.HashString +$newline + "CPD1  Hash:" +$cdp1_hash.HashString +$newline + "CDP2 Hash:" +$cdp2_hash.HashString +$newline
        # Make sure the email alert goes out. Override the $SMTP variable
        write-debug $newline
        write-debug "CRLs copied to CDPs hash values do not match Master CRL Hash"
        write-debug "Master CRL Hash value"
        write-debug $master_hash.HashString
        write-debug "CDP1 CRL Hash value"
        write-debug $cdp1_hash.HashString
        write-debug "CDP2 CRL Hash value"
        write-debug $cdp2_hash.HashString
        $SMTP = [bool]$true
        results $evt_string $EventHigh $SMTP
        exit
        }
    else
        {
        $evt_string = "New Master CRL published to CDPs. " + $CRL_Name + " has an EffectiveDate of: " + $CRL_Master.ThisUpdate.AddHours($creep) + " and a NextUpdate of: " + $CRL_Master.NextUpdate.AddHours($creep)
        results $evt_string $EventInformation $SMTP
        }
    }
else
    {
     write-debug "logic bomb, can’t determine where the Master CRL is in relationship to the CDP CRLs"
     }
################################################
#
# Master CRL’s ThisUpdate time is in between the NextCRLPublish time and NextUpdate.
# Note: Mono does not have a method to read ‘NextCRLPublish’
# The CA operator can define the ‘$threshold’ at which they want to start receiving alerts
#
################################################
if (($CRL_Master.NextUpdate.AddHours($creep) -gt $time) -and ($CRL_Master.ThisUpdate.AddHours($creep) -lt $time))
    {
    write-debug "checking threshold"
    # Is the Master CRL NextUpdate within the defined alert threshold?
    if($CRL_Master.NextUpdate.AddHours(-($threshold - $creep)) -lt $time)
        {
        write-debug "***** WARNING ****** Master CRL NextUpdate has a life less than threshold."
        write-debug $CRL_Master.NextUpdate.AddHours(-($threshold - $creep))
        $evt_string = "***** WARNING ****** Master CRL NextUpdate has a life less than threshold of: " + $threshold + " hour(s)" + $newline + "Master CRLs NextUpdate is: " + $CRL_Master.NextUpdate.AddHours($creep) + $newline +"Certsvc service is: " + $service.Status
        results $evt_string $EventWarning $SMTP
        }
    else
        {
        write-debug "Within the Master CRLs NextCRLPublish and NextUpdate period. Within threshold period."
        write-debug $CRL_Master.NextUpdate.AddHours(-($threshold - $creep))
        # Uncomment the following if you want notification on the CRLs
        #$evt_string = "Within the Master CRLs NextCRLPublish and NextUpdate period. Alerts will be send at " + $threshold + " hour(s) before NextUpdate period is reached."
        #results $evt_string $EventInformation $SMTP
        }
    }
else
    {
     write-debug "logic bomb, can’t determine where we are in the threshold"
    }

Categories: Uncategorized Tags:

Introduction to Windows Home Server “Vail” SDK

May 7th, 2010 No comments

Hi everyone! My name is Dileep and I am a development lead on the Windows Home Server team. It has been more than a week since we released the beta builds of the next version of Windows Home Server, “Vail”, and we are very encouraged by the downloads and the feedback pouring in through Connect. Thank you to everyone who is participating in the beta program! Alongside that, we thought this is the right time to start discussing some of the features of Vail and its extensibility with the community, to get the excitement started for building great, cool and useful add-ins for the next version of Windows Home Server. The SDK is also available to download from Connect, and if you have already tried it out, you might have noticed that the extensibility options for Vail are pretty broad in comparison with the first version of Windows Home Server. There are many ways one can extend Vail: by extending the Dashboard (the old Admin Console) or Launchpad, or by building a Provider. You can even build an add-in which has both server and client components to it.  The Vail SDK contains information about all these, plus APIs for many other core features like the Alerts framework, Storage (Server Folders & Hard Drives), Computers & Backup, Identity, Remote Access, Media Streaming and much more! The SDK also contains detailed documentation on how you can build, package and deploy your add-ins to the Vail server and clients. All in all, you can see that we’ve tried to put a much broader and more powerful SDK in place compared to the previous version.


There is a lot of information to digest in Vail, and in the SDK in particular. Hence we are taking this opportunity to do a series of blog posts about the various extensibility points of Vail at a much higher level than the actual SDK documentation. The idea is to give developers an overview of the capabilities of the Vail SDK, along with guidance on how to go about building add-ins the right way so that user experience or performance is not compromised. In the first of such posts, I am discussing the Vail ‘Dashboard’ and its extensibility vis-à-vis Home Server v1 here.


The Administration Console in Home Server v1 has been renamed to ‘Dashboard’ to better reflect the information it provides. Just like the Admin Console in v1, the Dashboard is still the main user interface for administrative or management types of tasks on the server. The Dashboard is where you would go to monitor the health of the network, create user accounts, view backups, add shared folders, increase storage capacity, enable or disable media streaming, etc. The Dashboard is still not the place to put any day-to-day non-administrative tasks. I have posted a document on the Connect website (link is given below) which talks about the differences in the Dashboard UI compared to the Home Server v1 Admin Console. In the document I talk about the new Dashboard layout, the new UI elements introduced, and the three different kinds of tabs that one can build, as well as plugging into the existing Microsoft tabs and wizards. I also cover the extensibility aspects of the Home Server v1 Admin Console which are no longer available in Vail.


You can download the complete document here.


(You will have to sign in to Microsoft Connect site.)


I hope I was able to give an adequate overview of the changes and new features in the Windows Home Server ‘Vail’ Dashboard in the document, especially when compared to v1. Please remember that this document is meant as a high-level overview of the extensibility points; the low-level details of all of those extensibility APIs, documentation, samples and templates are available in the Vail SDK. We would love to hear your comments and feedback. Moreover, we would love to get all of you started on writing cool add-ins for Vail. Please use the Vail SDK Forum for discussion and for seeking assistance with the SDK. In subsequent posts, we’ll cover other topics such as Launchpad, building Providers, add-in deployment, various object models, etc. Happy coding!


Download Dashboard overview document


Download “Vail” SDK

Discuss about “Vail” SDK

The Windows 7 Migration Continues….

Whew! The Microsoft® Management Summit is behind me and I can start looking forward to our 10-city bus tour. On a similar tour through Europe last year, we had a great time telling everyone we met about the Microsoft Desktop Optimization Pack (MDOP) and how it can make your life much easier (and make you a hero in your organization). The tour starts in May; I’ll definitely tell you more about it as the date gets a bit closer.


In the meantime, I wanted to pick up where I left off in my last blog post. Previously, I described how Microsoft Application Virtualization (App-V) can help streamline the deployment process. Sequencing for App-V (a process that’s similar to packaging applications such as .msi files) is straightforward. Deploying sequenced applications can be as simple as assigning them to users. Certainly, using App-V to deploy applications is far more efficient and less painful than using traditional repackaging techniques and including applications in an image or using a distribution system.


The story doesn’t end there, however. App-V continues to add value even after your deployment has stabilized and moved to a maintenance phase. App-V can help you better manage the application inventory. For example, imagine that you need to recall an application after deployment. Without automation, you must manually remove the application from each computer. Even with automation, you can never be sure that applications were completely removed; they leave footprints on the computer.


In contrast, App-V applications have no footprint. They are virtualized, so completely removing an application is as easy as removing the assignment. Afterward, the application is no longer available to the user, as if the application was never installed in the first place.


Because virtualized applications don’t have footprints, they’re also easy to update: Simply sequence the new version, and add it. App-V seamlessly updates the application, without affecting users, requiring downtime, or demanding a reboot. From users’ perspectives, the new version appears automatically the next time they launch the application. Compare this scenario to the process of updating applications in images (time-consuming) or deploying new versions of applications by using a software distribution system (disruptive).


As if streamlining application management wasn’t enough, App-V can have a more direct impact on your organization’s bottom line. By using metering rules in App-V, you can better understand the licenses actually used versus the total number of licenses purchased. Metering rules can help organizations ensure compliance with their vendors and avoid over-purchasing licenses.


If you haven’t looked at App-V in a while, now’s the time to do so—especially if you’re already engaged in a Windows® 7 deployment project. Microsoft recently released App-V 4.6, which has several improvements, including:


·         Support for true 64-bit applications


·         Support for desktops and servers running 64-bit Windows


·         Integration with Windows 7 AppLocker, BitLocker® Drive Encryption, and BranchCache


·         Thirteen new languages to support global businesses


App-V and Microsoft Enterprise Desktop Virtualization (MED-V) are big products, and sometimes they cast a big shadow over smaller MDOP features such as Microsoft Advanced Group Policy Management (AGPM) and the Microsoft Diagnostics and Recovery Toolset (DaRT). Both AGPM and DaRT can help make your job as an IT pro easier, though. AGPM provides a role-based workflow for Group Policy management. DaRT provides a powerful set of troubleshooting tools that can help you diagnose and fix problems with Windows 7 offline. Both products are incredibly easy to deploy and use. I’ll provide more details about them in my next post.

Categories: Uncategorized Tags:

Continuing our Move to Windows 7

After recently spending a week in Nashville and Dallas, talking to folks about desktop virtualization, I’m getting back from the Microsoft® Management Summit in Las Vegas. It seems like a terrible shame to spend a week in Las Vegas locked up in a conference room, giving demos and presentations, but the blue guys had to wait. For now, I want to continue talking about the Microsoft Desktop Optimization Pack (MDOP), and how it can help make the move to the Windows® 7 operating system a bit easier.


In my last blog post, I described how MDOP can help you plan for Windows 7. For example, you can build an inventory to help you choose which applications you want to leave behind, making the transition less cluttered. You can build this inventory by using the Microsoft Asset Inventory Service (AIS). You can also use Microsoft System Center Desktop Error Monitoring (DEM) to help monitor the impact of your Windows 7 migration.


After the planning phase, a Windows 7 deployment project moves to the development phase. Whether or not that phase is formal in your organization, MDOP can help make it easier.


Managing application compatibility is one of the most painful and time-consuming parts of desktop deployment. Using AIS to build an inventory, and then rationalizing the inventory to reduce its size, is the first step in reducing the pain. But most organizations still go through the tedious process of testing each application and mitigating issues.


MDOP provides a different option. Instead of testing and mitigating each application on Windows 7, you can use Microsoft Enterprise Desktop Virtualization (Med-V) to deploy unverified applications. After you’ve rolled out Windows 7 and caught your breath, you can revisit those applications: testing their compatibility with Windows 7, mitigating any issues you find, and deploying them natively. By using this strategy, you can move to Windows 7 more quickly, skipping much of the pain associated with application compatibility.


Application packaging and deployment is another chore. For a large-scale deployment project, you need to automate the deployment and configuration of many—too many—applications. First, you repackage applications so that they install and configure themselves silently. After testing them, you must decide how to deploy them: by including them in your image or by using products such as Microsoft System Center Configuration Manager 2007. How you package and deploy these applications also has a significant impact on the maintenance experience later. (I’ll discuss this topic more deeply in my next blog post.)


MDOP has a feature to help improve this aspect of deployment, too: Microsoft Application Virtualization (App-V). Our customers say that packaging applications for use with App-V (a process called sequencing) is easier and quicker than packaging the applications for native deployment. It isn’t uncommon to sequence a large application in less than an hour. However, the big win for App-V is deployment. Using App-V to deploy applications can be far easier than other deployment methods. For example, after sequencing an application, deploying it is as simple as assigning the application to a user or group. Recalling the application is just as easy, and App-V enables you to update applications without interrupting users.


Microsoft provides numerous free, powerful deployment tools for Windows 7. The Microsoft Deployment Toolkit 2010 is the most notable example. For our Software Assurance customers that already license MDOP, I encourage you to dust it off and consider the value that it can add to these tools. In my next blog post, I’ll describe MDOP features that can help you better manage your organization’s desktop computers after Windows 7 deployment. For more information about how MDOP fits into the desktop deployment process, see Optimizing Windows 7 Deployment with MDOP.

Categories: Uncategorized Tags: