
Archive for December, 2018

Visual Studio Code Updates for Java Developers: Rename, Logpoints, TestNG and More

December 14th, 2018

As we seek to continually improve the Visual Studio Code experience for Java developers, we’d like to share a couple of new features we’ve just released. Thanks to your great feedback over the year, we’re heading into the holidays with great new features we hope you’ll love. Here’s to a great 2019!

Rename

With the new release of the Eclipse JDT Language Server, we’re removing the friction some developers experienced when renaming Java classes, where the change didn’t carry through to the underlying file in Visual Studio Code. With this update, when a symbol is renamed, the corresponding source file on disk is renamed automatically, along with all references.

Debugger

Logpoints are now supported in the Java debugger for VS Code. Logpoints allow you to inspect state and send output to the Debug Console without changing the source code or explicitly adding logging statements. Unlike breakpoints, logpoints don’t stop the execution flow of your application.
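
As a rough illustration (the OrderProcessor class below is made up, and the curly-brace interpolation follows VS Code’s usual logpoint message syntax), a logpoint with a message such as "Processing order {orderId}" can stand in for the kind of temporary print statement you would otherwise add, rebuild, and later delete:

public class OrderProcessor {

    public void process(int orderId) {
        // The kind of temporary logging you would otherwise insert by hand;
        // a logpoint on this line can emit the same output to the Debug
        // Console without editing or rebuilding the file.
        System.out.println("Processing order " + orderId);
    }

    public static void main(String[] args) {
        new OrderProcessor().process(42);
    }
}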

To make debugging even easier, you can now skip editing the “launch.json” file by either clicking the CodeLens on top of the “main” function or using the F5 shortcut to debug the current file in Visual Studio Code.

TestNG support

TestNG support was added to the newest version of the Java Test Runner. With the new release, we’ve also updated the UIs of the test explorer and the test report. See how you can work with TestNG in Visual Studio Code.
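
If you’re new to TestNG, a minimal test class looks something like the sketch below (the test content is made up for illustration); once the TestNG dependency is on your project’s test classpath, the Test Runner discovers and runs it much like a JUnit test.

import org.testng.Assert;
import org.testng.annotations.Test;

public class MathTest {

    @Test
    public void absoluteValueOfNegativeNumberIsPositive() {
        // TestNG's assertEquals takes (actual, expected)
        Assert.assertEquals(Math.abs(-5), 5);
    }

    // TestNG expresses expected exceptions as an attribute of @Test
    @Test(expectedExceptions = ArithmeticException.class)
    public void integerDivisionByZeroThrows() {
        int denominator = 0;
        int unused = 1 / denominator;
    }
}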

We’ve also enhanced our JUnit 5 support with new annotations, such as @DisplayName and @ParameterizedTest.
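
For example, a JUnit 5 test using both annotations might look like this minimal sketch (the test data is made up, and it assumes the junit-jupiter-params artifact is on the test classpath):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class PalindromeTest {

    @DisplayName("Reversing a palindrome returns the same string")
    @ParameterizedTest
    @ValueSource(strings = { "level", "racecar" })
    void reversePalindrome(String input) {
        // The test method runs once per value supplied by @ValueSource
        assertEquals(input, new StringBuilder(input).reverse().toString());
    }
}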

Another notable improvement in the Test Runner is that we’re no longer loading all test cases during startup. Instead, loading now happens only when necessary, e.g. when you expand a project to see its test classes in the Test viewlet. This should reduce the resources needed in your environment and improve the overall performance of the tool.

Updated Java Extension Pack

We’ve added the recently released Java Dependency Viewer to the Java Extension Pack, as more and more developers are asking for the package view, dependency management, and project creation capabilities provided by this extension. The viewer also provides a hierarchical view of the package structure.

Additional language support – Chinese

As the user base of Java developers using Visual Studio Code expands around the world, we decided to make our tools easier to use internationally by offering translated UI elements. Chinese localization is now available for the Maven and Debugger extensions, and it will soon be available for other extensions as well. We also welcome community contributions to localization.

IntelliCode and Live Share

During last week’s Microsoft Connect() event, we shared updates on the popular Visual Studio Live Share and Visual Studio IntelliCode features. These new capabilities, both of which support Java, boost your productivity with enhanced collaboration and coding experiences, and you can try them in Visual Studio Code today.

Just download the extensions for Live Share and IntelliCode to experience those new features with your friends and co-workers. Happy coding and happy collaborating!

Attach missing sources

When you navigate to a class in a library that has no source attached, you can now attach the missing source zip/jar using the "Attach Source" context menu.

We love your feedback

Your feedback and suggestions are especially important to us and will help shape our products in the future. Please help us by taking this survey to share your thoughts!

Try it out

Please don’t hesitate to try Visual Studio Code for your Java development and let us know your thoughts! Visual Studio Code is a lightweight and performant code editor and our goal is to make it great for the entire Java community.

Xiaokai He, Program Manager
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He’s currently focusing on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.

 

Get to code: How we designed the new Visual Studio start window

December 13th, 2018

By now, many of you may have noticed a very prominent change to the launch of Visual Studio in Visual Studio 2019 Preview 1. Our goal with this new experience is to provide rapid access to the most common ways that developers get to their code: whether it’s cloning from an online repository or opening an existing project.

New start window in Visual Studio 2019

A month ago, we shared a sneak peek of the experience (in the blog post A preview of UX and UI changes) and mentioned the research and observation that we used as input into the design and development. This is the story about how we got there.

How & why we began this journey

Two years ago, we reinvented the Visual Studio installation experience to offer developers the ability to install exactly what they need and reduce the installation footprint of Visual Studio. We broke Visual Studio down into smaller packages and components and then grouped them together into development-focused workloads (which are bundles of packages and components). We quickly realized that the installation was just one piece of the journey our users take when they are getting started with Visual Studio.

We began to think more broadly, beyond just the installation of the bits, and explore the developer journey of getting to code. This journey spans from the moment you think about that great idea for an app all the way to writing your first lines of code and integrating Visual Studio into your daily routine. To help us understand what developers were doing during their first launch, we built a data-informed model of the customer journey.

Our Visual Studio Customer Journey

These insights helped us improve installation success rates and address common failures, but they couldn’t answer questions like why some users drop off from one step to the next, or how we make sure Visual Studio meets the needs of millions of developers. Some of you may even be trying to better understand how to make your own consumer or business customers more successful with your products.

So, from there we turned to existing mechanisms like surveys, interviews, social media (blogs), and A/B experimentation to help us understand where and how to improve these experiences. The surveys received an overwhelming number of responses (thank you to those of you who contributed!) and provided us with a foundation of anecdotes that helped us understand our individual users even more. They helped us recognize the different types of users coming through our "front door", which is to say the first place they learn about Visual Studio and decide to download it. Through early cohort analysis, we identified that almost half of the users downloading Visual Studio were brand new to it (but not necessarily new to coding), and that only some of those users came back to Visual Studio a second time. This was a surprising moment for us, as we had no idea why this was happening!

Going beyond the data

We knew we needed deeper insights into how we could help new users be successful with their first time in Visual Studio and assist them in making the best choices along their journey. Fundamentally, we wanted to identify what the "Magic Moment" would be for them in Visual Studio. "Magic Moment" is a phrase commonly used by product teams to describe the set of events or experiences that transforms a casual user trying something out into an avid, loyal user who finds success and even promotes the product. This moment is at the very core of identifying patterns that indicate which users will integrate a product or tool into their daily routine. We didn’t know what our Magic Moment was just yet, but we had a lot of ideas on what we believed it might be, so we asked ourselves:

Is there something new Visual Studio users do in the IDE that indicates they will return or abandon after 5 minutes?

We set out to answer this question by:

  1. Observing developers, both brand new to Visual Studio and seasoned, as they found, downloaded, and configured Visual Studio and started writing code.
  2. Identifying the problems that they encountered throughout their journey to code.
  3. Building hypotheses and concepts around getting to a “Magic Moment.”
  4. Validating the hypotheses and concepts via an iterative process of weekly testing and experimentation.

Our iterative process from interns to external users

We started by asking our summer interns to install and use Visual Studio for the first time in our user experience (UX) lab and document their journey. We were surprised at how long and difficult the journey from download to writing and running code was for them. We also gained insights into their expectations for Visual Studio based on other editors and IDEs they had previous experience with.

Our first step was simple: we gave participants a clean virtual machine with only Windows 10 installed and asked them to find, install, and use Visual Studio, and to "do whatever is natural to you to get started."

We then just watched…

One of our participants in the User Experience lab

It turns out even students think the 40-year-old concept of writing a "hello world" app is a great starting place. What also became extremely clear to us was the moment when users were writing and running code: we saw them become more engaged with Visual Studio and have fun. We saw an emotional change when they wrote their own code, compiled it, fixed some things, and ran it. We had a strong inkling that we were getting even closer to the "Magic Moment."

We then scaled up our research to bring in more new and experienced developers every week. We tested out many ideas using low-fidelity mockups built in PowerPoint and eventually moved to higher-fidelity prototypes. We tried variations of tasks and UIs as we tested our assumptions. There were multiple problems to solve, but one of the most significant became clear when we saw new Visual Studio developers struggle to open code or create a new project. The first view of Visual Studio was overwhelming, with no clear guidance on what they should do first. So, we set out to focus on that stage of the journey in our designs and storyboards for Visual Studio 2019. The design process looked a little something like this:

Visualizing the friction points in the customer journey

Evolution of our start window designs

Bringing all our insights into Visual Studio 2019

From all the design explorations, experiments, and observations evolved the idea of a start window that offers a focused experience to quickly get you writing those first lines of code. Given our insights, we wanted to ensure users, especially those new to Visual Studio (some of whom are already experienced with other development tools), could quickly experience that "Magic Moment" of writing their first lines of code and running it successfully every time.

The start window would support new Visual Studio users by:

  1. Highlighting the choices they must make during the early, crucial steps of getting started with Visual Studio.
  2. Removing distractions and providing suggestions for the best path forward.
  3. Enabling a search and filter focused experience for creating a new project.
  4. Promoting a streamlined online repository-first workflow.

Developers who are already well-versed and experienced with Visual Studio might be wondering what’s in it for them. What we’ve heard from experienced developers is that onboarding junior developers is very challenging, so we believe the new start window is a step towards ensuring they are more successful in getting to their code each time they open the IDE. We will also continue to preserve and enable existing workflows in the start window to support the muscle memory that experienced users have established with Visual Studio. Lastly, seasoned developers in the user experience lab were delighted by the new “Clone and check out code” experience which brings your online repositories right to your fingertips on launch.

Anatomy of the start window

We know the list of recent projects and solutions is one of the most common ways developers open code, so it was very important that we maintain this list in the most prominent part of the start window. We also knew it was VERY important to not break existing flows where developers open projects/solutions from the desktop (by double-clicking) – so the start window will never show in this flow, as your code will always take priority and open immediately.

Bringing a more focused, source-first clone and check out experience (like the Start Page had) to the forefront of Visual Studio was an opportunity to show new users the power of source providers like GitHub and Azure DevOps. We have also heard from our research with developers that this action is a prominent part of their daily workflow.

Opening a project or solution brings forward the concept of Visual Studio project and solution files that you can click on to open your entire codebase if you have an MSBuild-based solution. But if you use a non-MSBuild build system, such as CMake, then we would recommend opening a local folder instead. We’ve been investing in support to allow you to browse, edit, build, and debug any code without a .sln or project file. You can learn more about Open Folder, including how to configure a different build system to work with it, in our documentation. In addition, if you want to browse loose files in Visual Studio, you can just open the containing folder and pick up the file from the folder view of Solution Explorer.

Creating a new project is a big part of getting to your code in Visual Studio, whether it’s prototyping some throwaway code with a simple template (like the Console App) or trying out the capabilities of a new platform or language for the first time. Based on the workloads you install, you’ll always see the most commonly used templates first. We’ve observed that developers first think about the kind of application they want to build (a mobile app, a website, etc.) and not the language, so we removed the language-centric tree hierarchy and have improved searching and filtering to help you get to the right template more quickly. You’ll also find a more prominent list of recently used templates so you can get back to your favorite template with a single click.

Lastly, continue without code offers developers a one-time escape from the window for the times when a different action is needed to start work (like joining a Live Share collaboration session or attaching to a process). Alternatively, hitting the ESC key will also dismiss the window and immediately bring up the IDE. If there are other scenarios that you perform frequently and think should have a home on the start window (for example, attaching to a debugger), please upvote or create a suggestion in our Developer Community.

What’s next for this experience

In the week since releasing Visual Studio 2019 Preview 1, we’ve heard developers tell us the start window provides a "focused way to get to the most common things." We’re already working on some of your feedback, such as support for Team Foundation Version Control and better scannability of the recent solutions/projects list.

The start window experience is just one part of the journey we’re on to continue to streamline the onboarding experience to Visual Studio. Our longer-term vision includes improvements like reducing the number of choices required to download and install and offering relevant samples and tutorials to assist when learning a new language or platform.

Tell us what you think

As you can tell from the journey we’ve taken to get here, your feedback is essential to making this experience better. We’d love to have you try it out for a few hours in your everyday work. If it still doesn’t work for you, you can revert to the previous Visual Studio start behavior. Go to Tools > Options and search for ‘Preview Features’, which lets you configure this setting along with a few other preview features. Alternatively, you can find the option under Tools > Options > Environment > Startup.

Tools | Options settings for Preview Features

After you’ve experienced Visual Studio 2019 Preview 1, please help us make this the best Visual Studio yet by letting us know what you like or tell us what is not working well for you. And of course, if you run into any issues, please let us know by using the Report a Problem tool in Visual Studio. You can also head over to the Visual Studio Developer Community to track your issue or, even better, suggest a feature, ask questions, and find answers from other developers.

Cathy Sullivan, Principal Program Manager, Visual Studio Platform
@cathysull

Cathy Sullivan is a Principal Program Manager on the Visual Studio Acquisition team, focused on ensuring developers have a smooth onboarding experience the first time and every time with Visual Studio. She has worked on many Visual Studio Platform teams, building C#/VB language features and core UI/Shell features such as Solution Explorer, and she designed the beloved dark theme used by many developers.

Categories: Visual Studio, Visual Studio 2019

New Azure DevOps Work Item Experience in Visual Studio 2019

December 12th, 2018

In previous versions of Visual Studio, the work item experience was centered around queries, which need to be created and managed to find the right work items. In Visual Studio 2019, we have replaced queries with a new work item view centered on the developer. This lets developers quickly find the work they need and associate it with their pending changes, removing the need for queries.

If you use Visual Studio for work item planning and triage, we encourage you to do that from Azure Boards instead. Azure Boards is the central place to manage your backlog, triage work, and plan your sprints.

Be sure to read the full documentation on how to use the new Azure Boards work item experience in Visual Studio 2019.

Work Items Hub

The Work Items Hub in Visual Studio 2019 has many of the same views found in the Work Items Hub in Azure Boards. It is where developers can quickly find the work items that are important to them. Filters and views provide specific lists of work items, such as Assigned to Me, Following, Mentioned, and My Activity. From any of these views you can make quick inline edits, assign work, create branches, and associate work items with pending changes.

Create branches and relate work

Create a branch directly from a work item; this automatically associates that work item with your current changes. Alternatively, you can relate a work item to a set of changes already in progress. You can associate as many work items with a commit as you would like.

#Mention in commit message

Search for and select work items directly from the commit message, and associate as many work items with the commit as you would like.

We need your help

We want to build the best experience for developers who use Azure Boards in Visual Studio. Please provide your feedback by sending us bugs and suggestions. You can contact us on Twitter at @danhellem or @AzureDevOps.

Dan Hellem, Program Manager, Azure DevOps
@danhellem

Dan is a Program Manager with Microsoft’s Azure DevOps on the Azure Boards team. Before coming to Microsoft in 2012, Dan spent his career building applications using Microsoft technologies and assembling Agile teams centered on delivering high quality software to users.

A Year of Q#

December 11th, 2018

The Quantum Architecture and Computation group launched Q#, our quantum computing programming language, a year ago on December 11th, 2017.

Q# 0.1 was the result of a lot of hard work from a small, dedicated team of developers, researchers, and program managers. We had made the decision to build a domain-specific language for quantum computing about six months before we launched, so we were on a very tight schedule. We were lucky to have a great team of people who all pitched in and did what needed to be done so that we could meet our extremely aggressive timetable.

Start!

Inside the team, we speculated about the level of interest Q# would attract. We hoped we might receive a few hundred downloads, but we were blown away when we crossed 1,000 users within about nine hours of launch. That said, with so many users installing the Quantum Development Kit and trying to write simple programs with it, bugs started popping up. To deliver the best experience for our users, we released a patch in January that addressed issues like floating-point literals being handled incorrectly in certain locales, and allowed the simulator to run on older machines without vector instruction support.

We also addressed portability feature requests in our 0.2 release in February 2018, which saw us move from the .NET Framework to the open-source, cross-platform .NET Core. This allowed us to easily support macOS and Linux as well as Windows for building and running Q# code. We also added support for VS Code on all platforms (the 0.1 release was limited to Visual Studio on Windows). As part of the 0.2 release, we were able to make the majority of our libraries and samples available under an MIT license.

Long Hot Summer

We decided to take advantage of one of our team members’ expertise in organizing coding competitions and run a Q# coding competition to engage non-quantum developers with Q# and quantum computing. After a couple of months of preparation, we ran the Q# Coding Contest in early July. Again, the results exceeded our expectations: 514 participants in the warmup round, and 389 in the actual contest. 100 participants solved all the problems, and a lot of them even asked for more challenging ones!

To help make Q# and quantum computing more accessible to the public, we also launched self-paced programming tutorials: the Quantum Katas. We’re up to 10 katas already, and more are coming!

Spring, Summer, Autumn

We started planning the next major release in the spring of 2018, after shipping our 0.2 release: we wanted to rebuild our compiler to work as a language server, to give Q# developers the same interactive error checking and IntelliSense features they’re used to for languages like C# and F#. We knew this would be a huge amount of work and would require a significant re-architecture of the compiler in order to work incrementally. We didn’t want to wait longer to do this work, though, because we wanted to give our users the kind of modern programming environment they’re used to.

We spent the spring and summer re-architecting and rewriting the Q# compiler and shipped the new Q# compiler as our 0.3 release at the end of October.

The 0.3 release also includes a new, open source quantum chemistry library. This library integrates with NWChem, a powerful and popular open source computational chemistry package. The integration is based on the open source Broombridge schema.

Whatever Next

What’s next for Q#? No spoilers (yet!).

The last blog post of the calendar, scheduled for December 24th, will look at some of the things we’re considering for Q# in the coming year.

Until then, enjoy the holidays!

Congratulations to everyone who can figure out what the section titles have in common…

Alan Geller, Software Architect, Quantum Software and Applications
@ageller

Alan Geller is a software architect in the Quantum Architectures and Computation group at Microsoft. He is responsible for the overall software architecture for Q# and the Microsoft Quantum Development Kit, as well as other aspects of the Microsoft Quantum software program.

Categories: Q#, Quantum, Visual Studio

New Benefits in Visual Studio Subscriptions

December 11th, 2018

Last week at Microsoft Connect();, we announced two new benefits to assist cloud migration for our users who have Visual Studio Subscriptions. If you missed the event or want to watch the on-demand trainings, check out the Connect(); event page. If you’re a current Visual Studio subscriber, activate your new benefits to get started right away. To learn more about our developer subscriptions and programs visit the Visual Studio website.

Here are more details on the two new benefits:

CAST Highlight

Developers need critical insights on their software when migrating to the cloud. With CAST Highlight, Visual Studio Enterprise subscribers can rapidly scan their application source code to identify the cloud readiness of their applications for migration to Microsoft Azure and monitor progress of their app both during and after a migration. Check out this video from CAST to see it in action.

Visual Studio Enterprise subscribers can get a free, full-featured one-month subscription to CAST Highlight for up to five apps per subscriber.

UnifyCloud’s CloudPilot

Developers also need solutions that enable quick and easy app migration to the cloud. CloudPilot helps move apps to Microsoft Azure in a few easy steps, including identifying all required changes down to the line of code for successful migration to containers, virtual machines, App Service, Azure SQL, and SQL Managed Instance. See this video from UnifyCloud to learn more about CloudPilot.

Visual Studio Enterprise subscribers are eligible for two 90-day free licenses to the full-featured CloudPilot, while Visual Studio Professional subscribers can take advantage of one 30-day license to scan apps and databases of millions of lines of code in minutes.

Log into the Visual Studio Subscriptions portal at https://my.visualstudio.com today to get your new benefits.

Let us know what you want to see with the Visual Studio Subscriptions by sharing your feedback, suggestions, thoughts, and ideas in the comments below!

Lan Kaim, Director of Product Marketing

Lan is a Director on the Azure marketing team where she is responsible for the developer subscription business.

New Preview label for Visual Studio extensions

December 7th, 2018

Visual Studio extensions can now be marked with a Preview label, which is shown prominently on the Visual Studio Marketplace. This sets clear expectations for your customers that the version may contain issues while you actively develop new features. You can get feedback from your users earlier, test out new code changes to improve your extension, and continue to provide a stable version for users who require it.

Determining the quality of an extension is today an exercise left to the extension user. If an extension isn’t recommended by a friend, coworker, or other trusted person, all we can do is look at the number of downloads, the star rating, and the reviews to judge whether the quality seems high enough to try out the extension.

The new preview label explicitly communicates some important details to the consumer about what to expect from the extension. It helps them understand that the extension may not be feature-complete and may contain bugs. It also communicates that feedback is welcome to improve the extension before its final release, when the preview label is removed.

If the version of your extension is less than one (e.g. v0.5), it will likely benefit from adding the preview label.

Add the preview label

In your extension’s .vsixmanifest file, add the new <Preview> element to the <Metadata> node.

<Metadata>
  <Identity Id="[guid]" Version="0.8" Language="en-US" Publisher="My name" />
  <DisplayName>Extension name</DisplayName>
  <Description>Extension description.</Description>
  <Icon>Resources\Icon.png</Icon>
  <Preview>true</Preview>
</Metadata>

Now upload your extension to the Visual Studio Marketplace to see the new preview label show up. If your extension isn’t uploaded but referenced by a link, then there is a checkbox you can check to add the preview label when you edit your linked extension on the Marketplace.

It’s important to note that the extension will not change behavior in any way due to the addition of the preview label.

Try it out

If you have any applicable extensions, use the preview label to better communicate the right expectations to your users and improve your chances of higher ratings.

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he’s written some of the most popular ones with millions of downloads.

CISO series: Strengthen your organizational immune system with cybersecurity hygiene

December 6th, 2018

One of the things I love about my job is the time I get to spend with security professionals, learning firsthand about the challenges of managing security strategy and implementation day to day. There are certain themes that come up over and over in these conversations. My colleague Ken Malcolmson and I discussed a few of them on the inaugural episode of the Microsoft CISO Spotlight Series: CISO Lessons Learned. Specifically, we talked about the challenges CISOs face migrating to the cloud and protecting your organization’s data. In this blog, I dig into one of the core concepts we talked about: practicing cybersecurity hygiene.

Hygiene means conditions or practices conducive to maintaining health. Cybersecurity hygiene is about maintaining cyberhealth by developing and implementing a set of tools, policies, and practices to increase your organization’s resiliency in the face of attacks and exploits. Healthy habits like drinking lots of water, walking every day, and eating a rainbow of vegetables build up the immune system, so our bodies can fight off viruses with minimal downtime. Most of the time we don’t even realize how powerful the protection of these behaviors is until that day deep in January when you look around your office and realize you are one of the only people who isn’t sick. That’s what cybersecurity hygiene does; it strengthens your organizational immune system. It’s a simple concept, until you start thinking about the last time you resolved to start practicing healthy habits but were skipping the salad by day three because big salads make your stomach bloat and you’d rather have a candy bar anyway.

Success starts with strategy

No matter where in the world I am, CSOs and CISOs tell me their days are filled with fire drills and crises that consume attention and resources but don’t help advance a strategic agenda. A little like that candy bar: drawing focus in the present but diverting energy from long-term goals. In the precious moments of downtime, when cyber executives can turn attention to long-term strategy and proactive security measures, it’s not uncommon to have those goals diverted in a different way: chasing the latest trend that the board is excited about, or having to react to a failure or a finding from a recent security assessment.

Consistent change changes systems

Our bodies are systems: when we eat more vegetables, our microbiome changes, it becomes easier to digest those veggies, and we can actually begin craving them. But if you stock the pantry with candy instead of leafy greens, it’s hard to make a consistent change. For cyberhealth, you need a strategy that works with the strengths of your organization and mitigates its weaknesses. It’s a little like planning to be healthy. If you are social, it can help to enlist a friend in your exercise routine. If you work late, you can buy prepared, healthy food, so you aren’t as tempted to grab that candy bar after a long day.

To implement good security practices, take some time to understand your budget, your priorities, and your greatest vulnerabilities, and allocate your money appropriately. Create strategic cybersecurity targets and goals for the next one, three, and five years, and engage the C-suite and board in the approvals. You will feel more empowered in conversations with the C-suite when you have a good rationale and a solid plan. And when cybersecurity hygiene becomes a systemic part of the organization, the healthy system will start to crave it.

Practice good cybersecurity hygiene

Once you have a strategy, you are ready to institute some best practices. We recommend that all our clients, big and small, get started with the following:

  • Back up data: Make sure you have a regular process to back up your data to a location separate from your production data and encrypt it in transit and at rest.
  • Implement identities: A good identity and access management solution allows you to enable a single common identity across on-premises and cloud resources with added safeguards to protect your most privileged accounts.
  • Deploy conditional access: Use conditional access to control access based on location, device, or other risk factors.
  • Use Multi-Factor Authentication: Multi-Factor Authentication works on its own or in conjunction with conditional access to verify that users trying to access your resources are who they say they are.
  • Patching: A strategy to ensure all of your software and hardware is regularly patched and updated is important to reduce the number of security vulnerabilities that a hacker can exploit.

Develop cybersecurity hygiene with industry security frameworks

Excited to build healthy cyber habits but not sure where to start? The National Institute of Standards and Technology (NIST) Cybersecurity Framework is a great place to start. You can also download blueprints that will help you implement Microsoft Azure according to NIST standards.

The Center for Internet Security (CIS) is a non-profit organization that helps organizations protect themselves from cybercrime. Review the CIS Microsoft Azure Foundations Benchmark, which provides recommended steps to securely implement Azure.

Stay healthy, eat your cyber vegetables, and stay up to date by watching our Microsoft CISO Spotlight Series: CISO Lessons Learned, and your organization can build the resiliency to take on any threat.


Categories: cybersecurity

Visual Studio Live Share for real-time code reviews and interactive education

December 6th, 2018

Collaborating with your team using Visual Studio Live Share keeps getting easier! Since making Live Share available for public use at Build last May, we’ve heard so much great feedback from our users, which has helped guide us in continuing to build a tool that truly enables developers to collaborate in all the ways they need from the comfort of their favorite tools. Your feedback has pointed us towards new collaboration scenarios that we had not previously thought of (e.g. technical interviews and hackathons), as well as helped us prioritize some of the biggest feature requests and issues, like sharing multi-root workspaces, making it easier to visually interact with a Live Share session, and allowing guests to start a debugging session. We’ve come a long way, and we have so much more collaboration goodness to bring you! If you haven’t checked out Live Share yet, get started collaborating today!

Live Share + Visual Studio 2019

After installing the Visual Studio 2019 Preview, you’ll immediately notice a Live Share button in the top-right corner of your IDE. Starting with Visual Studio 2019, Live Share is now installed by default with most workloads, making it easier than ever to start collaborating with your teammates.

For a quick overview of how to get started with Live Share, we have a walkthrough for you to watch:

We’ve also been working to bring features that enable better Live Share support for Visual Studio users. For our public preview release, we announced universal language support across all languages and platforms. We’re happy to say we’ve continued to expand this capability and have recently added enhanced support for more languages, including C++, VB.NET, and Razor, with F# and Python on the way soon.

Additionally, one of the top feature requests from Visual Studio users of Live Share was enabling Solution View for guests. Guests now see the project-based view of the codebase rather than the "folder view", so guests and the host no longer have different views into the project; everyone has the same view, as if they were all developing locally.

Real-time code reviews

Another big area of collaboration among teams comes when committing your code and conducting reviews. Live Share wants to further enhance that experience and offer new ways to work with your teammates.

When a host shares their code during a Live Share session, guests have access to the shared source control diffs. Available in both Visual Studio (with the Pull Requests for Visual Studio extension) and Visual Studio Code, guests can view the diffs to get context on what changes have been made before or during a session. This helps with real-time code reviews in your tool of choice, or even with figuring out a merge conflict.

Another aspect of the reviewing experience is comments. For that, Live Share enables in-line commenting. While in a session, participants can add comments to code for others to see in real-time. You can use this for making notes about certain changes found in a shared diff or making a to-do list of things to accomplish during the collaboration session.

To further enhance your code review experience, and ensure you can use the tools you’re already familiar with, we’re excited to announce that GitLens has added support for Live Share! As a guest, you can visualize code authorship with Git blame annotations, navigate through line/file/repo history, and view diffs between arbitrary baselines (e.g. commits, branches, or tags).

Collaboration comes in so many different forms, so working with the extension ecosystem to holistically address the various ways developers work together is integral to Live Share. We’ve also partnered with other 3rd party extensions to augment Live Share collaboration sessions with their additional capabilities, like auto-sharing servers created by Live Server, sharing Test Explorer results with guests, and letting guests execute code as it is written with Quokka.js.

Interactive Education

The primary goal of Visual Studio Live Share is to enable developers to collaborate with each other more easily, and education is a scenario we care deeply about. Whether you’re mentoring a developer on your team, or giving a lecture to a classroom, Live Share provides participants with an experience that is more engaging and truly personalized to everyone’s learning needs.

While Live Share was already applicable to education, we’ve specifically addressed key feedback items to ensure it’s further optimized for the diverse needs of instructors and students everywhere.

Get Collaborating!

If you haven’t had a chance, give Visual Studio Live Share a try! The extension is also available for download for Visual Studio 2017 and Visual Studio Code users. Additionally, it is available as a default install option in the new Visual Studio 2019 Preview.

For more information about using Live Share, please check out our docs! We’re so grateful for all the amazing feedback we’ve received from you, and we love hearing more. We talked about a few new use cases we’ve optimized for based on your feedback, and we’re excited about all the new ways we can enhance how you collaborate. As always, feel free to send us your feedback by filing issues and feature requests on GitHub.

Happy Collaborating!

Jon Chu, Program Manager
@jonwchu

Jon is a Program Manager on the Visual Studio Live Share team, bringing collaboration tools to developers and enabling them to tell their own unique stories. Prior to Live Share, he worked on XAML tooling and NuGet.

Step 1. Identify users: top 10 actions to secure your environment

December 5th, 2018

This series outlines the most fundamental steps you can take with your investment in Microsoft 365 security solutions. We’ll provide advice on activities such as setting up identity management through Active Directory, malware protection, and more. In this post, we explain how to create a single common identity across on-premises and cloud resources with hybrid authentication.

Establishing a single, common identity for each user is the foundational step in your cybersecurity strategy. If you currently have an on-premises footprint, this means connecting your Azure Active Directory (Azure AD) to your on-premises resources. Various requirements and circumstances will influence the hybrid identity and authentication method you choose, but whether you choose federation or cloud authentication, each has important security implications you should consider. This blog walks you through our recommended security best practices for each hybrid identity method.

Set up password hash synchronization as your primary authentication method when possible

Azure AD Connect allows your users to access both on-premises and cloud resources, including Azure, Office 365, and Azure AD-integrated SaaS apps, using one identity. It uses your on-premises Active Directory as the authority, so you can use your own password policy, and Azure AD Connect gives you visibility into the types of apps and identities that are accessing your company resources. If you choose Azure AD Connect, Microsoft recommends that you enable password hash synchronization (Figure 1) as your primary authentication method. Password hash synchronization synchronizes the password hash in your on-premises Active Directory to Azure AD. It authenticates in the cloud with no on-premises dependency, simplifying your deployment process. It also allows you to take advantage of Azure AD Identity Protection, which will alert you if any of the usernames and passwords in your organization have been sold on the dark web.

Figure 1. Password hash sync synchronizes the password hash in your on-premises Active Directory to Azure AD.

Enable password hash synchronization as a backup during on-premises outages

If your authentication requirements are not natively supported by password hash synchronization, another option available through Azure AD Connect is pass-through authentication (Figure 2). Pass-through authentication provides simple password validation for Azure AD authentication services by using a software agent that runs on one or more on-premises servers. Since pass-through authentication relies on your on-premises infrastructure, your users could lose access to both Azure AD-connected cloud resources and on-premises resources if your on-premises environment goes down. To limit user downtime and loss of productivity, we recommend that you configure password hash synchronization as a backup. This allows your users to sign in and access cloud resources during an on-premises outage. It also gives you access to advanced security features, like Azure AD Identity Protection.

Figure 2. Pass-through authentication provides a simple password validation for Azure AD authentication services.

Whether you implement password hash synchronization as your primary authentication method or as a backup during on-premises outages, you can use the Active Directory Federation Services (AD FS) to password hash sync deployment plan as a step-by-step guide to walk you through the implementation process.

Implement extranet lockout if you use AD FS

AD FS may be the right choice if your organization requires on-premises authentication or if you are already invested in federation services (Figure 3). Federation services authenticates users and connects to the cloud using an on-premises footprint that may require several servers. To ensure your users and data are as secure as possible, we recommend two additional steps.

First, enable password hash synchronization as a backup authentication method to get access to Azure AD Identity Protection and minimize interruptions if an outage occurs. Second, we recommend you implement extranet lockout. Extranet lockout protects against brute force attacks that target AD FS, while preventing users from being locked out of Active Directory. If you are using AD FS running on Windows Server 2016, set up extranet smart lockout. For AD FS running on Windows Server 2012 R2, you’ll need to turn on extranet lockout protection.

Figure 3. Federation services authenticates users and connects to the cloud using an on-premises footprint.

You can use the AD FS to pass-through authentication deployment plan as a step-by-step guide to walk you through the implementation process.

Learn more

Check back in a few weeks for our next blog post, Step 2. Manage authentication and safeguard access, where we’ll dive into additional protections you can apply to your identities to ensure that only authorized people access the appropriate data and apps.

Get deployment help now

FastTrack for Microsoft 365 provides end-to-end guidance to set up your security products. FastTrack is a deployment and adoption service that comes at no charge with your subscription. Get started at FastTrack for Microsoft 365.

Resources


Categories: cybersecurity

Visual Studio IntelliCode supports more languages and learns from your code

December 5th, 2018

At Build 2018, we announced Visual Studio IntelliCode, a set of AI-assisted capabilities that improve developer productivity. IntelliCode includes features like contextual IntelliSense code completion recommendations, code formatting, and style rule inference.

IntelliCode has just received some major updates that make its context-sensitive, AI-assisted IntelliSense recommendations even better. You can download the updated IntelliCode Extension for Visual Studio and IntelliCode Extension for Visual Studio Code today! The Visual Studio extension already works with the newly released Visual Studio 2019 Preview 1.

AI-assisted IntelliSense recommendations based on your language of choice

Many of you have requested IntelliCode recommendations for your favorite languages. With this update, we’re excited to add four more languages to the list that can get AI-assisted IntelliSense recommendations. In our extension for Visual Studio, C++ and XAML now get IntelliCode alongside existing support for C#. In our extension for Visual Studio Code, TypeScript/JavaScript and Java are added alongside existing support for Python.

We’ll be sharing more details about IntelliCode’s support for each language on their respective blogs [C++ | TypeScript and JavaScript | Java]

AI-assisted IntelliSense for C# with recommendations based on your own code

Until now, IntelliCode’s recommendations have been based on learning patterns from thousands of open source GitHub repos. But what if you’re using code that isn’t in that set of repos? Perhaps you use a lot of internal utility and base class libraries, or domain-specific libraries that aren’t commonly used in open source code, and would like to see IntelliCode recommendations for them too? If you’re using C#, you can now have IntelliCode learn patterns and make recommendations based on your own code!

When you open Visual Studio after installing the updated IntelliCode Extension for Visual Studio, you’ll see a prompt that lets you know about training on your code and directs you to the brand new IntelliCode page to get started. You can also find the new page under View > Other Windows > IntelliCode. Once training is done, we’ll let you know the top classes we found usage for, so you can just open a C# file and start typing to try out the new recommendations. We keep the trained models secured, so only you and those who have been given your model’s sharing link can access them; your model and what it’s learned about your code stay private to you. See our FAQ for more details.

Check out Allison’s video below to see how this new feature works.

Get Involved

As you can see, IntelliCode is growing new capabilities fast. Get the IntelliCode Extension for Visual Studio and the IntelliCode Extension for Visual Studio Code to try right away, and let us know what you think.  You can also find more details about the extensions in our FAQ.

IntelliCode and its underlying service are in preview at present. If you hit issues using the new features and you’re using Visual Studio, use the built-in Visual Studio “Report a Problem” option, and mention IntelliCode in your report. If you’re a Visual Studio Code user, just head over to our GitHub issues page and report your problem there.

If you want to learn more or keep up with the project as we expand the capabilities to more scenarios and other languages, please sign up for email updates. Thanks!

Mark Wilson-Thomas, Senior Program Manager

@MarkPavWT  #VSIntelliCode

Mark is a Program Manager on the Visual Studio IntelliCode team. He’s been building developer tools for over 10 years. Prior to IntelliCode, he worked on the Visual Studio Editor, and on tools for Office, SQL, WPF and Silverlight.

Making every developer more productive with Visual Studio 2019

December 4th, 2018

Today, in the Microsoft Connect(); 2018 keynote, Scott Guthrie announced the availability of Visual Studio 2019 Preview 1. This is the first preview of the next major version of Visual Studio. In this Preview, we’ve focused on a few key areas, such as making it faster to open and work with projects stored in git repositories, improving IntelliSense with Artificial Intelligence (AI) (a feature we call Visual Studio IntelliCode), and making it easier to collaborate with your teammates by integrating Live Share. With each preview, we’ll be adding capabilities, improving performance, and refining the user experience, and we absolutely want your feedback.

For a quick overview of the new functionality, you can keep reading this blog, or if you want a video overview, check out our team member Allison’s introduction to Visual Studio 2019. But before you do either, make sure to kick off the download.

Enabling you to focus on your work

Right off the bat, you’ll notice that Visual Studio 2019 opens with a new start window on launch. This experience is better designed to work with today’s Git repositories – whether local repos or online Git repos on GitHub, Azure Repos, or elsewhere. Of course, you can still open an existing project or solution or create a new one. (This experience is also coming soon to Visual Studio 2019 for Mac.) We’ll have a more detailed blog post on the new start window experience next week, which will also go into some of the research that supported this revamp.

Visual Studio 2019 start window

Visual Studio 2019 for Mac start window

Once you’re in the IDE, you’ll notice a few changes to the UI and UX of Visual Studio 2019. Jamie Young recently published a blog post with more detail on these changes, but to recap, they include a new product icon, a refreshed blue theme with small changes across the UI to create a cleaner interface, and a more compact title and menu bar – for which we’ve heard your feedback loud and clear and are working to further optimize.

In addition to the enhancements Jamie mentions, today we’re sharing the new search experience in Visual Studio 2019, which replaces the existing “Quick Launch” box. You can now search for settings, commands, and install options. The new search experience is also smarter, as it supports fuzzy string searching to help find what you are looking for even when misspelled.

The new search experience in Visual Studio 2019

When you’re coding, Visual Studio 2019 makes it easier to get your work done quickly. We’ve started by focusing on code maintainability and consistency experiences in this preview. We’ve added new refactoring capabilities, such as changing for-loops to LINQ queries and converting tuples to named structs, to make it even easier to keep your code in good shape. With the new document health indicator and code clean-up functionality, you can now easily identify and fix warnings and suggestions with the click of a button.

The document health indicator and code clean-up command

Common debugging tasks are also easier. You’ll immediately see that stepping performance is improved, allowing for a much smoother debugging experience. We’ve also added search capabilities to the Autos, Locals, and Watch windows to help you track down objects and values. Watch for a future blog post that goes deeper into the debugger improvements in Visual Studio 2019, including the new Time Travel Debugging for managed code feature (coming in a future Preview), updates to the Snapshot Debugger to target Azure Kubernetes Service (AKS) and Virtual Machine Scale Sets (VMSS), and better performance when debugging large C++ projects, thanks to an out-of-process 64-bit debugger.

Search in the Watch window

Helping your team work together

Building on the work we started in Visual Studio 2017, we’re improving Visual Studio IntelliCode, our context-aware and AI-powered IntelliSense, to enable training it on your own code repositories and sharing the results with your team. IntelliCode reduces the number of keystrokes you need, since the completion lists are prioritized based on the most common coding patterns for that API, combined with the context of the code in your existing project. We’ll have a blog post on all the improvements in IntelliCode coming later this week, including more details on learning from your code, and on the C++ and XAML support being added for Visual Studio 2019.

Visual Studio IntelliCode using a trained model

Earlier this year, we introduced Visual Studio Live Share to help you collaborate in real time with anyone across the globe using Visual Studio or Visual Studio Code. Live Share is installed by default with Visual Studio 2019, so you can immediately invite your teammates to join your coding session to take care of a bug or help make a quick change. You’ll also find it’s easier to start a session and see who you’re working with in a dedicated space at the top right of the user interface. We’ll also have a deeper-dive post on Visual Studio Live Share improvements in the next few days, covering support for any project, app type, and language, Solution View for guests, and support for more collaboration scenarios.

Visual Studio Live Share integrated in Visual Studio 2019

Last, we’re introducing a brand-new pull request (PR) experience in Visual Studio 2019, which enables you to review, run, and even debug pull requests from your team without leaving the IDE. We support code in Azure Repos today but are going to expand to support GitHub and improve the overall experience. To get started, you can download the Pull Requests for Visual Studio extension from the Visual Studio Marketplace.

The new pull request experience in Visual Studio 2019

.NET Core 3 Preview 1

We also announced .NET Core 3 Preview 1 today, and Visual Studio 2019 will be the release to support building .NET Core 3 applications for any platform. Of course, we also continue to support and improve cross-platform C++ development, as well as .NET mobile development for iOS and Android with Xamarin.

.NET Core 3.0 development in Visual Studio 2019

Help us build the best Visual Studio yet

We are very thankful to have such an active community and can’t wait to hear what you think about Visual Studio 2019. Please help us make this the best Visual Studio yet by letting us know of any issues you run into by using the Report a Problem tool in Visual Studio. You can also head over to the Visual Studio Developer Community to track your issue or, even better, suggest a feature, ask questions, and find answers from others.

We will share more about the full feature set and SKU lineup of Visual Studio 2019 in the coming months as we release more previews. You can try Visual Studio 2019 side-by-side with your current installation of Visual Studio 2017, or if you want to give it a spin without installing it, check out the Visual Studio images on Azure.

I also want to take a moment to thank our vibrant extension ecosystem, who have made over 400 extensions available for Visual Studio 2019 Preview 1 already, and more are being added each day. You can find these extensions on the Visual Studio Marketplace.

Microsoft has always been a company with developers at the heart – we’re humbled that the community of users of the Visual Studio family has surpassed 12 million. We aim to make every second you spend coding more productive and delightful. Please continue to share your feedback on the preview for Visual Studio 2019 to help guide the future direction of the product so it becomes your favorite tool. Thank you!

John Montgomery, Director of Program Management for Visual Studio
@JohnMont

John is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, JavaScript, and .NET. John has been at Microsoft for 17 years, working in developer technologies the whole time.

Insights from the MITRE ATT&CK-based evaluation of Windows Defender ATP

In MITRE's evaluation of endpoint detection and response solutions, Windows Defender Advanced Threat Protection demonstrated industry-leading optics and detection capabilities. The breadth of telemetry, the strength of threat intelligence, and the advanced, automatic detection through machine learning, heuristics, and behavior monitoring delivered comprehensive coverage of attacker techniques across the entire attack chain.

MITRE tested the ability of products to detect techniques commonly used by the targeted attack group APT3 (also known as Boron or UPS). To isolate detection capabilities, all protection and prevention features were turned off as part of the testing. In the case of Windows Defender ATP, this meant turning off blocking capabilities like hardware-based isolation, attack surface reduction, network protection, exploit protection, controlled folder access, and next-gen antivirus. The test showed that, by itself, Windows Defender ATP's EDR component is one of the most powerful detection and investigation solutions in the market today.

Microsoft is happy to be one of the first EDR vendors to sign up for the MITRE evaluation based on the ATT&CK framework, widely regarded today as the most comprehensive catalog of attacker techniques and tactics. MITRE closely partnered with participating security vendors in designing and executing the evaluation, resulting in a very collaborative and productive testing process.
We like participating in scientific and impartial tests because we learn from them. Learning from independent tests, like listening to customers and conducting our own research, is part of our goal to make sure that Windows Defender ATP is always ahead of threats and continues to evolve.

Overall, the results of the MITRE evaluation validated our investments in continuously enriching Windows Defender ATP's capabilities to detect and expose attacker techniques. Below we highlight some of the notable attacker techniques that Windows Defender ATP effectively detected during the MITRE testing.

Deep security telemetry and comprehensive coverage

Windows Defender ATP showed exceptional capabilities for detecting attacker techniques through APT3's attack stages, registering the lowest number of misses among evaluated products. Throughout the emulated attack chain, Windows Defender ATP detected the most critical attacker techniques, including:

  • Multiple discovery techniques (detected with Suspicious sequence of exploration activities alert)
  • Multiple process injection attempts for privilege escalation, credential theft, and keylogging/screen capture
  • Rundll32.exe being used to execute malware
  • Credential dumping from LSASS
  • Persistence via Scheduled Task
  • Keylogging (both in Cobalt Strike and PS Empire)
  • Brute force login attempts
  • Accessibility features attack (abusing sticky keys)
  • Lateral movement via remote service registration

Windows Defender ATP correlates security signals across endpoints and identities. In the case of the APT3 emulation, signals from Azure Advanced Threat Protection helped expose and enrich the detection of the account discovery behavior. This validates the strategic approach behind Microsoft Threat Protection: the most comprehensive protection comes from sharing rich telemetry collected from across the entire attack chain.

Windows Defender ATP's Antimalware Scan Interface (AMSI) sensors also proved especially powerful, providing rich telemetry on the latter stages of the attack emulation, which made heavy use of malicious PowerShell scripts. This test highlighted the value of transparency: the AMSI interface enabled deep visibility into the PowerShell used in each attacker technique. Advanced machine learning-based detection capabilities in Windows Defender ATP use this visibility to expose malicious scripts.

Stopping attacks in the real world with Windows Defender ATP's unified endpoint security platform

The MITRE results represent EDR detection capabilities, which surface malicious and other anomalous activities. In actual customer environments, Windows Defender ATP's preventive capabilities, like attack surface reduction and next-gen protection, would have blocked many of the attack techniques at the onset. In addition, investigation and hunting capabilities enable security operations personnel to correlate alerts and incidents to enable holistic response actions and build wider protections.

Windows Defender ATP's best-in-class detection capabilities, as affirmed by MITRE, are amplified across Microsoft solutions through Microsoft Threat Protection, which provides comprehensive, integrated protection for identities, endpoints, user data, cloud apps, and infrastructure. To run your own evaluation of how Windows Defender ATP can help protect your organization and let you detect, investigate, and respond to advanced attacks, sign up for a free Windows Defender ATP trial.

Windows Defender ATP team

Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

The post Insights from the MITRE ATT&CK-based evaluation of Windows Defender ATP appeared first on Microsoft Secure.

Categories: APT3, ATT&CK, BORON, cybersecurity, MITRE Tags:

Kicking off the Microsoft Graph Security Hackathon

December 3rd, 2018 No comments

Cybersecurity is one of the hottest sectors in tech, with Gartner predicting worldwide information security spending to reach $124 billion by the end of 2019. New startups and security solutions are coming onto the market while attackers continue to find new ways to breach systems, and the security solutions market has grown at a rapid pace as a result. Our customers face immense challenges in integrating all these different solutions, tools, and intelligence. Oftentimes, the number of disconnected solutions makes it more difficult, rather than easier, to defend against and recover from attacks.

We invite you to participate in the Microsoft Graph Security Hackathon for a chance to help solve this pressing challenge and win a piece of the $15,000 cash prize pool.* This online hackathon runs from December 1, 2018 to March 1, 2019 and is open to individuals, teams, and organizations globally.

The Microsoft Graph Security API offers a unified REST endpoint that makes it easy for developers to bring security solutions together to streamline security operations and improve cyber defenses and response. You can also tap into other Microsoft Graph APIs, as well as mash up data and APIs from other sources, to extend or enrich your scenarios.
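
To give a concrete sense of what the unified REST endpoint looks like in practice, here is a minimal Python sketch that lists recent alerts. It assumes you have already acquired an OAuth 2.0 access token (for example, via MSAL) for an application granted the SecurityEvents.Read.All permission; the token placeholder and the $top value are illustrative.

# Minimal sketch: list recent alerts from the Microsoft Graph Security API.
# ACCESS_TOKEN is a placeholder; obtain a real token via your own auth flow.
import requests

ACCESS_TOKEN = "<your-access-token>"

response = requests.get(
    "https://graph.microsoft.com/v1.0/security/alerts",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$top": 5},  # only the five most recent alerts
)
response.raise_for_status()

for alert in response.json().get("value", []):
    print(alert.get("severity"), alert.get("title"))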

Prizes

In addition to learning more about the Microsoft Graph and the security API, the hackathon offers these awesome prizes for the top projects:

  • $10,000 cash prize for the first-place solution, plus a speaking opportunity at Build 2019.
  • $3,000 cash prize for the runner-up solution.
  • $2,000 cash prize for the popular choice solution, chosen via public voting.

In addition, all three winning projects, and the individuals or teams in the categories above, will be widely promoted on Microsoft blog channels, giving you the opportunity for your creative solutions to be known to the masses. The judging criteria will consist of the quality of the idea, value to the enterprise, and technical implementation. You can find all the details you need on the Microsoft Graph Security Hackathon website.

Judging panel

Once the hackathon ends on March 1, 2019, judging commences immediately. We'll announce the winners on or before April 1, 2019. The hackathon will be judged by a panel of Microsoft and non-Microsoft experts and influencers in the developer community and in cybersecurity, including:

  • Ann Johnson, Corporate Vice President, Cybersecurity Solutions Group, Microsoft
  • Scott Hanselman, Partner Program Manager, Microsoft
  • Mark Russinovich, CTO, Microsoft Azure
  • Rick Howard, Chief Security Officer, Palo Alto Networks

We will announce more judges in the coming weeks!

Next steps

Let the #graphsecurityhackathon begin!

*No purchase necessary. Open only to new and existing Devpost users who are the age of majority in their country. Game ends March 1, 2019 at 5:00 PM Eastern Time. For details, see the official rules.

The post Kicking off the Microsoft Graph Security Hackathon appeared first on Microsoft Secure.

Categories: cybersecurity Tags:

Analysis of cyberattack on U.S. think tanks, non-profits, public sector by unidentified attackers

December 3rd, 2018 No comments

Reuters recently reported a hacking campaign focused on a wide range of targets across the globe. In the days leading to the Reuters publication, Microsoft researchers were closely tracking the same campaign.

Our sensors revealed that the campaign primarily targeted public sector institutions and non-governmental organizations like think tanks and research centers, but also included educational institutions and private-sector corporations in the oil and gas, chemical, and hospitality industries.

Microsoft customers using the complete Microsoft Threat Protection solution were protected from the attack. Behavior-based protections in multiple Microsoft Threat Protection components blocked malicious activities and exposed the attack at its early stages. Office 365 Advanced Threat Protection caught the malicious URLs used in the emails, driving the blocking of those emails, including first-seen samples. Meanwhile, numerous alerts in Windows Defender Advanced Threat Protection exposed the attacker techniques across the attack chain.

Third-party security researchers have attributed the attack to a threat actor named APT29 or CozyBear, which largely overlaps with the activity group that Microsoft calls YTTRIUM. While our fellow analysts make a compelling case, Microsoft does not yet believe that enough evidence exists to attribute this campaign to YTTRIUM.

Regardless, due to the nature of the victims, and because the campaign features characteristics of previously observed nation-state attacks, Microsoft took the step of notifying thousands of individual recipients in hundreds of targeted organizations. As part of the Defending Democracy Program, Microsoft encourages eligible organizations to participate in Microsoft AccountGuard, a service designed to help these highly targeted customers protect themselves from cybersecurity threats.

Attack overview

The aggressive campaign began early in the morning of Wednesday, November 14. The targeting appeared to focus on organizations that are involved with policy formulation and politics or have some influence in that area.

Phishing targets in different industry verticals

Although targets are distributed across the globe, the majority are located in the United States, particularly in and around Washington, D.C. Other targets are in Europe, Hong Kong, India, and Canada.

Phishing targets in different locations

The spear-phishing emails mimicked sharing notifications from OneDrive and, as noted by Reuters, impersonated the identity of individuals working at the United States Department of State. If recipients clicked a link in the spear-phishing emails, they began an exploitation chain that resulted in the implantation of a DLL backdoor, giving the attackers remote access to the recipients' machines.

Attack chain

Analysis of the campaign

Delivery

The spear-phishing emails used in this attack resemble file-sharing notifications from OneDrive.

The emails contain a link to a legitimate but compromised third-party website:

hxxps://www.jmj.com/personal/nauerthn_state_gov/TUJE7QJl[random string]

The random strings are likely used to identify distinct targeted individuals who clicked on the link. However, all observed variants of this link redirect to a specific link on the same site:

hxxps://www.jmj.com/personal/nauerthn_state_gov/VFVKRTdRSm

When users click the link, they are served a ZIP archive containing a malicious LNK file. All files in a given attack have the same file name, for example, ds7002.pdf, ds7002.zip, and ds7002.lnk.

Installation

The LNK file represents the first stage of the attack. It executes an obfuscated PowerShell command that extracts a base64-encoded payload from within the LNK file itself, starting at offset 0x5e2be and extending 16,632 bytes.

Encoded content in the LNK file
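
For illustration, the following Python sketch (not the attacker's code) shows how an analyst might carve and decode that embedded payload from a local copy of the LNK file, using the offset and length described above; the output file name is hypothetical.

# Sketch: carve the base64-encoded payload out of ds7002.lnk
# (offset 0x5e2be, length 16,632 bytes, as described above) and decode it.
import base64

with open("ds7002.lnk", "rb") as f:
    f.seek(0x5E2BE)          # offset of the encoded payload
    encoded = f.read(16632)  # length of the encoded payload

decoded = base64.b64decode(encoded)
with open("second_stage.ps1", "wb") as out:
    out.write(decoded)       # the heavily obfuscated second-stage script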

The encoded payload, another heavily obfuscated PowerShell script, is decoded and executed:

Decoded second script

The second script carves out two additional resources from within the .LNK file:

  • ds7002.PDF (A decoy PDF)
  • cyzfc.dat (The first stage implant)

Command and control

The first-stage DLL, cyzfc.dat, is created by the PowerShell script in the path %AppData%\Local\cyzfc.dat. It is a 64-bit DLL that exports one function: PointFunctionCall.

The PowerShell script then executes cyzfc.dat by calling rundll32.exe. After connecting to the first-stage command-and-control server at pandorasong[.]com (95.216.59.92), cyzfc.dat begins to install the final payload by taking the following actions:

  1. Allocate a ReadWrite page for the second-stage payload
  2. Extract the second-stage payload as a resource
  3. Take a header of size 0xEF bytes that is baked into the first payload
  4. Concatenate the header with the resource, starting at byte 0x12A
  5. De-XOR the second-stage payload with a rolling XOR (ROR1), starting from key 0xC5 (see the sketch after this list)
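
As an illustration of step 5, here is a minimal Python sketch of one plausible reading of the rolling XOR: each byte is XORed with the current key, and the key is then rotated right by one bit. The exact key-update rule is an assumption based on the description above, not recovered attacker code.

# Sketch: de-XOR a buffer with a rolling XOR (ROR1) starting from key 0xC5.
def ror1(value):
    # rotate an 8-bit value right by one bit
    return ((value >> 1) | ((value & 1) << 7)) & 0xFF

def rolling_xor_decode(data, key=0xC5):
    out = bytearray()
    for b in data:
        out.append(b ^ key)  # decode the current byte
        key = ror1(key)      # advance the key for the next byte
    return bytes(out)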

The second stage is an instance of Cobalt Strike, a commercially available penetration testing tool, which performs the following steps:

  1. Define a local named pipe with the format \\.\pipe\MSSE-<number>-server, where <number> is a random number between 0 and 9897
  2. Connect to the pipe and write global data of size 0x3FE00 to it
  3. Implement a backdoor over the named pipe:

    1. Read from the pipe (maximum 0x3FE00 bytes) into an allocated buffer
    2. De-XOR the payload onto a new RW memory region, this time with a much simpler XOR key: every 4 bytes are XORed with 0x7CC2885F (see the sketch after this list)
    3. Mark the region as RX
    4. Create a thread that starts running the payload
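
A minimal Python sketch of the simpler de-XOR in step 2 above: every 4-byte dword of the payload is XORed with the constant 0x7CC2885F. Little-endian byte order is assumed here for illustration.

# Sketch: de-XOR a payload by XORing every 4-byte dword with 0x7CC2885F.
import struct

def dword_xor_decode(data, key=0x7CC2885F):
    out = bytearray()
    for i in range(0, len(data) - 3, 4):
        (dword,) = struct.unpack_from("<I", data, i)
        out += struct.pack("<I", dword ^ key)
    out += data[len(out):]   # leave any trailing bytes unchanged
    return bytes(out)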

The phase that writes global data to the pipe actually writes a third payload. That payload is XORed with the same algorithm used for reading. When decrypted, it forms a PE file with a Meterpreter header, which interprets instructions in the PE header and moves control to a reflective loader:

The third payload eventually gets loaded and connects to the command-and-control (C&C) server address that is baked into configuration information in the PE file. This configuration information is de-XORed at third-payload runtime:

The configuration information itself mostly contains C&C information:

Cobalt Strike is a feature-rich penetration testing tool that provides remote attackers with a wide range of capabilities, including escalating privileges, capturing user input, executing arbitrary commands through PowerShell or WMI, performing reconnaissance, communicating with C&C servers over various protocols, and downloading and installing additional malware.

End-to-end defense through Microsoft Threat Protection

Microsoft Threat Protection is a comprehensive solution for enterprise networks, protecting identities, endpoints, user data, cloud apps, and infrastructure. By integrating Microsoft services, Microsoft Threat Protection facilitates signal sharing and threat remediation across services. In this attack, Office 365 Advanced Threat Protection and Windows Defender Advanced Threat Protection quickly mitigated the threat at the onset through durable behavioral protections.

Office 365 ATP has enhanced phishing protection and coverage against new threats and polymorphic variants. Detonation systems in Office 365 ATP caught behavioral markers in links in the emails, allowing us to successfully block campaign emails, including first-seen samples, and protect targeted customers. Three existing behavioral-based detection algorithms quickly determined that the URLs were malicious. In addition, Office 365 ATP uses security signals from Windows Defender ATP, which had a durable behavior-based antivirus detection (Behavior:Win32/Atosev.gen!A) for the second-stage malware. If you are not already secured against advanced cyberthreat campaigns via email, begin a free Office 365 E5 trial today.

Safe Links protection in Office 365 ATP protects customers from attacks like this by analyzing unknown URLs when customers try to open them. Zero-hour Auto Purge (ZAP) actively removes emails post-delivery after they have been verified as malicious; this is often critical in stopping attacks that weaponize embedded URLs after the emails are sent.

All of these protections and signals on the attack entry point are shared with the rest of the Microsoft Threat Protection components. Windows Defender ATP customers would see alerts related to the detection of the malicious emails by Office 365 ATP, as well as the behavior-based antivirus detection.

Windows Defender ATP detects known filesystem and network artifacts associated with the attack. In addition, the actions of the LNK file are detected behaviorally. Alerts with the following titles are indicative of this attack activity:

  • Artifacts associated with an advanced threat detected
  • Network activity associated with an advanced threat detected
  • Low-reputation arbitrary code executed by signed executable
  • Suspicious LNK file opened

Network protection blocks connections to malicious domains and IP addresses. The following attack surface reduction rule also blocks malicious activities related to this attack:

  • Block executable files from running unless they meet a prevalence, age, or trusted list criterion

Through Windows Defender Security Center, security operations teams can investigate these alerts and pivot to machines, users, and the new Incidents view to trace the attack end-to-end. Automated investigation and response capabilities, threat analytics, as well as advanced hunting and new custom detections, empower security operations teams to defend their networks from this attack. To test how Windows Defender ATP can help your organization detect, investigate, and respond to advanced attacks, sign up for a free Windows Defender ATP trial.

The following Advanced hunting queries can help security operations teams search for any related activities within the network:

//Query 1: Events involving the DLL container
let fileHash = "9858d5cb2a6614be3c48e33911bf9f7978b441bf";
find in (FileCreationEvents, ProcessCreationEvents, MiscEvents, 
RegistryEvents, NetworkCommunicationEvents, ImageLoadEvents)
where SHA1 == fileHash or InitiatingProcessSHA1 == fileHash
| where EventTime > ago(10d)

//Query 2: C&C connection
NetworkCommunicationEvents 
| where EventTime > ago(10d) 
| where RemoteUrl == "pandorasong.com" 

//Query 3: Malicious PowerShell
ProcessCreationEvents 
| where EventTime > ago(10d) 
| where ProcessCommandLine contains 
"-noni -ep bypass $zk=' JHB0Z3Q9MHgwMDA1ZTJiZTskdmNxPTB4MDAwNjIzYjY7JHRiPSJkczcwMDIubG5rIjtpZiAoLW5vdChUZXN0LVBhdGggJHRiKSl7JG9lPUdldC1DaGlsZEl0" 

//Query 4: Malicious domain in default browser commandline
ProcessCreationEvents 
| where EventTime > ago(10d) 
| where ProcessCommandLine contains 
"https://www.jmj.com/personal/nauerthn_state_gov" 

//Query 5: Events involving the ZIP
let fileHash = "cd92f19d3ad4ec50f6d19652af010fe07dca55e1";
find in (FileCreationEvents, ProcessCreationEvents, MiscEvents, 
RegistryEvents, NetworkCommunicationEvents, ImageLoadEvents)
where SHA1 == fileHash or InitiatingProcessSHA1 == fileHash
| where EventTime > ago(10d)

The provided queries check events from the past ten days. Change EventTime to focus on a different period.

Windows Defender Research team, Microsoft Threat Intelligence Center, and Office 365 ATP research team

Indicators of attack

Files (SHA-1)

  • ds7002.ZIP: cd92f19d3ad4ec50f6d19652af010fe07dca55e1
  • ds7002.LNK: e431261c63f94a174a1308defccc674dabbe3609
  • ds7002.PDF (decoy PDF): 8e928c550e5d44fb31ef8b6f3df2e914acd66873
  • cyzfc.dat (first-stage): 9858d5cb2a6614be3c48e33911bf9f7978b441bf

URLs

  • hxxps://www.jmj[.]com/personal/nauerthn_state_gov/VFVKRTdRSm

C&C servers

  • pandorasong[.]com (95.216.59.92) (first-stage C&C server)

Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

The post Analysis of cyberattack on U.S. think tanks, non-profits, public sector by unidentified attackers appeared first on Microsoft Secure.

Qubits in Q#

December 1st, 2018 No comments

How should qubits be represented in a quantum programming language?

In the quantum circuit model, a quantum computation is represented as a sequence of operations, sometimes known as gates, applied to a set of qubits. This leads to pictures such as:

Teleportation quantum circuit

from Quantum Computation and Quantum Information by Nielsen and Chuang

In this picture, each horizontal line is a qubit, each box is an operation, and time flows from left to right.

When we want to design a programming language to express a quantum computation, the question naturally arises of whether qubits should be represented in the language, and if so, how. In the most naive model of such a picture, there would be a software entity that represented each horizontal line, with little or no accessible state other than perhaps a label, but there are other possibilities.

Quantum States as Linear Types

Quipper uses linear types to model quantum states, although it refers to the data type as Qubit. The intuition is that the no-cloning theorem prevents you from making a copy of the quantum state of a qubit, so it makes sense to use the type system to prevent you from making a copy of the software representation of quantum state. Linear types have proven useful in other contexts where copying the software representation of an entity would be bad, for instance in functional concurrent programming, so why not for quantum computing?

The argument for linear types is, as mentioned, that they reflect the no-cloning theorem in the language. The implication is that a software qubit represents a qubit state, rather than an actual object. The problem with this view is that, once entangled, it no longer makes sense to talk about the state of an individual qubit; that's more or less exactly what being entangled means. To model this in software, an entangling gate such as a CNOT should take two single-qubit states as inputs and return a software entity that represents a two-qubit state as output, which works only as long as you never want to perform a CNOT on two qubits that are each entangled with other qubits.

The software entity as quantum state abstraction breaks down for two reasons:

  • because quantum computing works by applying operations to physical entities, not to quantum states, so the abstraction doesn’t correspond to operational reality; and
  • because in general there is no actual physical quantum state smaller than the state of the entire quantum computer, so the abstraction doesn’t correspond to physical reality.

Qubits as Opaque References

An alternate approach is to use an opaque data type that represents a reference to a specific two-state quantum system, whether physical or logical (error-corrected), on which operations such as `H` or `X` may be performed. This is an operational view of qubits: qubits are defined by what you can do to them. Both OpenQASM and Q# follow this model.

Quantum Computing is Computing by Side Effect

The representation used in Q# has the interesting implication that all of the actual quantum computing is done by side effect. There is no way to directly interact with the quantum state of the computer; it has no software representation at all. Instead, one performs operations on qubit entities that have the side effect of modifying the quantum state. Effectively, the quantum state of the computer is an opaque global variable that is inaccessible except through a small set of accessor primitives (measurements) — and even these accessors have side effects on the quantum state, and so are really “mutators with results” rather than true accessors.

In general-purpose programming, the use of side effects and global state is usually discouraged. For quantum computing, on the other hand, they seem to match the actual physical reality pretty well. For this reason, we decided that this abstraction was the right one to use in Q#.

This post is the first one in the Q# Advent Calendar 2018. Follow the calendar for other great posts!

Alan Geller, Software Architect, Quantum Software and Applications
@ageller

Alan Geller is a software architect in the Quantum Architectures and Computation group at Microsoft. He is responsible for the overall software architecture for Q# and the Microsoft Quantum Development Kit, as well as other aspects of the Microsoft Quantum software program.

Categories: Q#, Quantum, Visual Studio Tags: