
Microsoft Loves Linux Deep Dive #12: Summary of Running and Managing Linux and FreeBSD in Your Datacenter


This post was written by Michael Kelley, Principal PM Manager, Cloud + Enterprise

Introduction

This blog post is #12 in a series of technical posts about running and managing Linux and FreeBSD in your on-premises datacenter.  Previous posts in the series are here:

  • Overview
  • Running Linux and FreeBSD as a guest operating system on Hyper-V
  • Managing Linux and UNIX using System Center and PowerShell DSC

Microsoft Loves Linux

First, I hope you take away the understanding that Microsoft is committed to Linux and FreeBSD as first-class guest operating systems in your datacenter.  Maybe you’ve seen the “Microsoft (heart) Linux” tagline.  But perhaps recent tweets best capture the sense that Microsoft’s approach to Linux and open source has changed.

Running Linux and FreeBSD

Second, doing a great job of running Linux and FreeBSD as guest operating systems on Hyper-V is fundamental.  We’ve heard your feedback as customers that your datacenters are heterogeneous.  You run workloads on Windows, you run workloads on Linux, and you probably run some workloads on UNIX.  Being able to run Linux and FreeBSD as a guest on Hyper-V enables you to operate a single infrastructure for all of these workloads, without having to bifurcate the underlying hardware, the hypervisor, and the management infrastructure.  The first few posts in this series describe how Microsoft works with the Linux and FreeBSD communities to build the drivers for running these operating systems on Hyper-V.  The posts also describe how high I/O performance is achieved in a virtual environment, and the sophisticated management features, such as online backup and dynamic memory, that accrue to Linux and FreeBSD guests running on Hyper-V.

The capability to run Linux and FreeBSD as guest operating systems on Hyper-V underlies all the different ways you may be organizing your on-premises datacenter.  Maybe you are doing straight virtualization in order to consolidate on less hardware and drive up utilization.  Maybe you are going a step further and operating a true private cloud based on Hyper-V, System Center Virtual Machine Manager, and Windows Azure Pack.  Or maybe you are using the Microsoft Cloud Platform System (CPS), which is an integrated system, combining hardware and software into a private cloud offering that is pre-assembled, pre-installed, and fully tested end-to-end. Finally, as we move into calendar year 2016, the Microsoft Azure Stack offering will provide a fully Azure-compatible private cloud that you can run in your datacenter.  In all of these cases, Linux and FreeBSD run as guest operating systems with Hyper-V as the base hypervisor.  Of course, the same is true for the Azure public cloud as well.

Managing Linux and UNIX

Third, managing Linux and UNIX is equally key.  Regardless of the environment in which you are operating your Linux and UNIX workloads, Microsoft products can provide the OS and workload management, just like with Windows.  You can get a single view across your entire datacenter, even if you are running on physical hardware or a hypervisor other than Hyper-V.  Our customers have been using System Center to manage their Linux and UNIX servers since 2009, and today several hundred thousand such servers are under management with System Center in production environments.

System Center Operations Manager (OpsMgr) provides day-to-day monitoring of the Linux operating system as well as several different Java application servers.  New for OpsMgr in the upcoming 2016 release is monitoring of key open source middleware components such as the Apache web server and the MySQL and MariaDB databases, running on Linux.  OpsMgr monitoring can also be easily extended to cover your specific application and workload needs via custom shell command lines or other scripts running on Linux.  System Center Configuration Manager (ConfigMgr) provides hardware inventory and installed software inventory across all of your Linux instances, and provides software distribution, enabling you to push out new software packages to large groups of Linux servers based on criteria you control.  OpsMgr and ConfigMgr also go beyond just Windows and Linux, providing the same capabilities for major UNIX operating systems.

PowerShell DSC for Linux and Microsoft Operations Management Suite (OMS) are the two newest management products to offer Linux capabilities. PowerShell DSC for Linux gives you a consistent set of tools to define the state of Windows and Linux operating systems, and to detect and remediate configuration drift.  You’ll see additional announcements in this area in the coming months as we continue to do new development work.  Similarly, OMS is one of an increasing set of hybrid offerings that span the public cloud and on-premises operations by offering management of on-premises servers from the cloud.  No on-premises infrastructure is needed.  OMS Log Analytics features are now available for Linux in public preview, enabling collection, analysis, and visualization of performance metrics and syslog data. The OMS Automation Solution enables runbook-based automation for your Windows and Linux environment as well as configuration management with Desired State Configuration. You can expect the functionality offered to grow rapidly in the coming months.
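For readers who have not yet tried PowerShell DSC for Linux, here is a minimal sketch of what a configuration can look like, assuming the nx resource module is installed and a Linux node has been onboarded; the node name and package choices are illustrative only:

Configuration ApacheOnLinux {
    Import-DscResource -ModuleName nx    # DSC for Linux resource module (assumed installed)

    Node 'linuxserver.contoso.com' {     # hypothetical node name
        nxPackage apache {
            Name           = 'apache2'
            Ensure         = 'Present'
            PackageManager = 'apt'       # use 'yum' on RPM-based distributions
        }
        nxService apacheService {
            Name       = 'apache2'
            Controller = 'systemd'       # or 'init'/'upstart', depending on the distribution
            Enabled    = $true
            State      = 'Running'
        }
    }
}

# Compile to a MOF, then push it to the node over CIM (a CIM session to the
# Linux machine is assumed to be in $linuxSession):
ApacheOnLinux -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -CimSession $linuxSession -Wait -Verbose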

Hybrid:  Azure and On-Premises

Finally, while this blog post series has focused on capabilities for your on-premises datacenter, heterogeneity is equally important in the Azure public cloud.  Indeed, more than 25% of the customer VMs in Azure are running Linux, and that percentage is growing.  A range of services, including IaaS virtual machines, HDInsight (Hadoop), and the Azure Container Service are available today in Azure running Linux.  Plus, a huge set of open source tools and services are available to you running in the Azure public cloud.  As your organization considers putting workloads in the public cloud while retaining an on-premises datacenter footprint, the Microsoft hybrid story is unique in its ability to integrate the two worlds and give you the flexibility to move between them as suits your business purposes.  Running and managing Linux and FreeBSD in both worlds is a fundamental part of that hybrid story.

Wrap-Up

While this series wraps up with this post, in 2016 we’ll have additional technical posts about running and managing Linux and UNIX in your datacenter.  For example, a multi-part series on setting up and running Linux operating system clusters using shared VHDX files in a Hyper-V cluster is planned to start in January.  Thanks for reading!


Intelligent Management: How You Take Action with What You Already Have


This is Part 3 of a 3-part blog series based on the new eBook, “Protect Identities, Devices, and Your Company Information in Today’s Device-Centric World.” Check out Part 1 and Part 2.

A lot of cloud and enterprise mobility vendors are going to end a discussion about planning for your future by explaining why a wholesale rip and replace of your existing infrastructure is critical. I think that is a bit drastic.

Instead, in this post, I’m going to look at what you can do to build upon what you already have in order to ensure you can meet the future needs of your growing workforce, as well as be ready to deliver what they need right now.

To put all of this in perspective, let’s look at 3 scenarios and see what a cloud-based control plane built on EMS can do for you:

  • Managed Mobile Productivity
  • End-to-end Information Protection
  • Identity-driven Security

I think it’s important to consider these three elements together, rather than as things to be tackled individually. In the examples below, I’ll make the case for why I think they are best addressed as a converged part of your IT processes – especially when approached as cloud-based services.

Managed Mobile Productivity

We all use mobile devices for the very simple reason that they make us so much more productive. But, if these devices can’t be effectively managed, then the risks become far too great – and that tradeoff simply isn’t worth it. With this in mind, it is valuable to think about productivity and effective management as the same goal. What every organization really needs, then, is managed mobile productivity.

This is a topic that is especially near and dear to me – in fact, I begin nearly every meeting (whether internal or external) with this statement about the vision of the EMS team:

[Image: the EMS team vision statement]

To see how EMS makes this possible, let’s first look at how a user (Anna) adds a new iPad (but the same process applies to Windows or Android devices) to the corporate network:


Figure 10: EMS can automatically install management software on a device, then enforce policies for accessing applications.

As I’ve noted on many previous occasions, identity is the foundation for everything else. That’s why this process begins with Anna logging in with Azure AD (step 1). The iPad she’s using might be her own, or it might be one that her organization has provided. In either case, the first thing she does after logging in is try to access a SaaS app. In the example seen in Figure 10, that application is Exchange Online (a part of Office 365) and she wants to access her corporate e-mail. Because her new iPad is currently unmanaged, this request is re-directed to Intune (step 2).

Once directed to Intune, the software is then installed on Anna’s iPad (with her permission, of course) to allow this device to be managed and receive all the policies that have been defined for iPads (as seen in step 3). These policies may specify (based on the discretion of the admins) that in order to be a part of your corporate environment, an iPad must have an unlock password set up, it must encrypt the corporate data it stores, and it will require the user’s email account to be managed. To set up these policies, the admin will rely on both Azure AD and Intune.

Now that Anna’s device is managed, she can successfully access her corporate e-mail (step 4). The final step before the e-mail starts flowing includes Azure AD and Intune working together to ensure that she is 100% compliant with a policy defined for this specific app. For example, an Exchange Online policy might require requests to come from Intune-managed devices that have also applied all the available updates. This is an example of conditional access, where a user is allowed to do something only if several conditions are met: e.g., the right identity, the right kind of device with the right characteristics, and more at the discretion of network admins.

Conditional access is an incredibly powerful feature, and it’s only possible when multiple services work together.

End-to-end Information Protection

Once Anna has access to Exchange Online, she immediately starts receiving her corporate e-mail. Now that she’s up and running, the corporate data on the device needs to be protected – no matter where that iPad travels (different offices, airports, Starbucks) – and IT needs a way to stop her from (accidentally or intentionally) sending this information to outsiders. To do this, you need end-to-end information protection.

This kind of end-to-end protection is best provided by EMS through the combined feature sets of Azure AD, Intune, and Azure RMS:


Figure 11: EMS protects corporate information by letting it be used and copied only within a managed environment and by embedding access controls directly into encrypted files.

As seen in Figure 11, suppose Anna receives a corporate e-mail with an attached Excel spreadsheet (step 1), opens this attachment using the Excel mobile app on her iPad, and then tries to copy and paste data from the spreadsheet to the iPad’s built-in Notes app. With EMS in place, this attempt will fail (step 2).

The reason this fails is that Intune effectively separates managed apps on Anna’s iPad from her personal apps. As Figure 11 shows, Anna’s Office mobile apps are all marked as “Managed,” which means that data from these apps cannot be copied to non-managed apps. In this example, the “Paste” option will not appear when she tries to move data from the Excel spreadsheet to the Notes app. Anna is, of course, free to move information between the managed apps (e.g. from an Excel spreadsheet to a Word document) – but that is all.

Only Microsoft can provide this kind of information protection for the Office mobile apps on iPads and Android devices – absolutely no other MAM vendor can do this. If Anna wants to use the Office mobile apps for both business and personal work, she’s free to do this – all she needs to do is log in with a different identity. Intune will make sure that she can access only her personal data when she’s logged in with a non-corporate identity.

To learn more about how to bring your internal applications into this solution, and to learn more about how Box, Adobe, SAP, Citrix, and others have updated their apps to include the Intune MAM capabilities, check out this recent post about the Intune Application Ecosystem.

Also, although unrelated to security, in this scenario consider that the Excel spreadsheet renders perfectly when it opens. Surprisingly often, the Office apps will not render properly on other EMM solutions, and you end up with garbled spreadsheets. This is actually one of the most common complaints I hear from customers using other EMM solutions – far too often their Office documents won’t render and are unusable. You want to use the real Office!

The information protection provided by Intune is essential for mobile devices – but, by itself, it is not enough. Suppose, for example, that Anna receives an e-mail with another attachment containing confidential corporate data (step 3). She may never open this doc on her iPad, but she may accidentally forward it to someone outside the company. In this scenario, even more expansive end-to-end information protection is necessary. Azure RMS was created to solve problems like this.

In the event that the attachment Anna received is protected by Azure RMS, it is encrypted. The document has embedded into it the identities of the users who can access the document and the rights they have for editing or reading it. If the user attempting to open the document is not one of those users, they simply cannot open the file (step 4). Azure RMS uses Anna’s identity (which is provided via Azure AD), along with information in the protected document itself, to determine what access rights she has to that doc. Even if the doc is in her inbox, the settings on that doc might allow her only to read the document. In the event that doc is forwarded, the external recipient would have absolutely no rights to it and it would be un-openable.

Azure RMS protects information wherever that information might travel. This is a matter of protecting data wherever it goes and wherever it is accessed. We believe you should always be able to control your data – even when it is accessed by devices you don’t control. This is another example of the evaporating “perimeter” that was historically used to protect data. Some data must be mobile to truly be valuable; and when that data is mobile and being shared, it is typically outside the perimeter. These files must be self-protecting.

When operated together, these two EMS components provide truly end-to-end information protection.

Identity-driven Security

As noted near the beginning of this post, all of the capabilities we’ve seen thus far rely upon identity. Intune uses Anna’s identity to decide what policies to apply to her device, and Azure RMS decides what level of access she should have to a sensitive document. Identity is central to everything a cloud-based platform provides.

The logical next question is this: What happens if an attacker is able to compromise Anna’s identity?

There are countless ways for this to happen: For example, perhaps she is using a password that is easy to guess, or perhaps her credentials are captured through social engineering or a phishing attack. Both of these circumstances are very common – and, once breached, you need tools that can identify compromised accounts and help you block their access. A major source of protection against this is Azure AD’s multi-factor authentication feature.

Detecting and neutralizing this kind of attack requires identity-based security. To put that terminology in context, consider that an attacker using a stolen identity usually behaves differently than the actual owner of that identity. Microsoft’s ability to detect that difference in behavior is how we keep your organization safe. Microsoft Advanced Threat Analytics (ATA) can detect these differences and then alert your security staff to the problem:

Figure 12: Azure AD can warn about several kinds of spurious logins.

Here’s how this scenario plays out: Anna logs into Azure Active Directory (step 1), then works her typical daytime schedule. Because Anna works for the human resources department, she primarily accesses the organization’s HR app and the data associated with it (step 2). But now an attacker logs in as Anna using her stolen credentials (step 3). This attacker immediately acts differently – instead of spending most of her time accessing HR data, she’s going through financial documents and technical research. Also, this hacker has logged in during the middle of the night in Anna’s timezone (step 4).

This variation in behavior can be detected by Microsoft ATA. By monitoring traffic in and out of your on-premises Active Directory, then using machine learning technology to analyze this traffic, ATA can quickly learn the usual access patterns of your users and spot a user deviating from those patterns, alerting your security staff to the possible breach (step 5).

Also, with Azure Active Directory Premium, you can require MFA or a password change when these abnormalities are detected.

Once an attacker has penetrated an organization, she commonly lurks for months looking for opportunities, and she’s often not discovered until she’s already exploited what she’s found. The average time before a breached user account is discovered is over 200 days (in many cases far more than 200). It is really scary to consider the damage that can be done and what can be stolen over that massive period of time. Using an identity-driven security approach with ATA, together with the reporting services provided by Azure AD, can help you detect and stop these attacks before they damage your business.

The sophistication of the security offered by Machine Learning simply cannot be overstated. Take a moment to look at the post New Levels of Security via Machine Learning & Combined Data Sets where I go into detail on the strength of our Machine Learning-based security solution. I’ve also recorded two separate podcasts on this topic (here and here), as well as written about how your network’s architecture can be made rock solid with a Machine Learning-based approach to security.

To learn more about Azure Machine Learning, check out these resources:

Next Steps:

 


ScriptAnalyzer v1.2.0 Released

We are pleased to announce the release of v1.2.0 of ScriptAnalyzer - a significant release with many updates to help and documentation content, fixes to the built-in default rule set based on community feedback, support for consuming PowerShell content as streams, improvements in Custom Rule support and Engine error handling, and numerous community contributions.


You can grab this version from the PowerShell Gallery:
http://www.powershellgallery.com/packages/PSScriptAnalyzer/1.2.0


Detailed change log is here:

https://github.com/PowerShell/PSScriptAnalyzer/blob/master/CHANGELOG.MD


Features:
  • Support for consuming PowerShell content as streams (-ScriptDefinition)
  • ScriptAnalyzer accepts configuration (settings) in the form of a hashtable (-Settings), added sample Settings
  • Ability to run default ruleset along with custom ones in the same invocation (-IncludeDefaultRules)
  • Recurse Custom Rule Paths (-RecurseCustomRulePath)
  • Consistent Engine error handling when working with Settings, Default and Custom Rules
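To make the new switches concrete, here is a short, hedged sketch of how they can be used (the module and rule folder paths are hypothetical):

# Analyze PowerShell content supplied as a string rather than a file
Invoke-ScriptAnalyzer -ScriptDefinition 'gps | Where-Object { $_.CPU -gt 100 }'

# Run the default rule set together with custom rules, recursing through the custom rule folder
Invoke-ScriptAnalyzer -Path .\MyModule -CustomRulePath .\MyCustomRules -RecurseCustomRulePath -IncludeDefaultRules

# Supply settings as a hashtable instead of a settings file
Invoke-ScriptAnalyzer -Path .\MyModule\MyScript.ps1 -Settings @{ ExcludeRules = @('PSAvoidUsingWriteHost') }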
 

Rules:
  • Rule to detect the presence of default value for Mandatory parameters (AvoidDefaultValueForMandatoryParameter)
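As an illustration, the following hypothetical function would be flagged by this rule, because PowerShell always prompts for mandatory parameters and the default value is never used:

function Get-Widget {
    param(
        [Parameter(Mandatory = $true)]
        [string]$Name = 'DefaultWidget'   # flagged by AvoidDefaultValueForMandatoryParameter
    )
    "Getting widget $Name"
}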
 
 
Fixes:
Engine:
  • Engine update to prevent script based injection attacks
  • CustomizedRulePath is now called CustomRulePath – Fixes to handle folder paths
  • Fixes for RecurseCustomRulePath functionality
  • Fix to binplace cmdlet help file as part of build process
  • ScriptAnalyzer Profile is now called Settings
  • Fix to emit filename in the diagnosticrecord when using Script based custom rules
  • Fix to prevent Engine from calling Update-Help for script based custom rules
  • Added additional pester tests to take care of test holes in Custom Rule feature
  • Post-build error handling improvements, fixed typos in the project
 Rules:
  • Fixed bug in Positional parameter rule to trigger only when used with >= 3 positional parameters
  • Updated keywords that trigger PSAvoidUsingPlainTextForPassword rule
  • Updated ProvideDefaultParameterValue rule to AvoidDefaultValueForMandatoryParameter rule
  • Deprecated the Internal Url rule based on community feedback; identified additional rules to handle hardcoded paths, etc.
  • Added localhost exceptions for HardCodedComputerName Rule
  • Update to Credential based rules to validate the presence of CredentialAttribute and PSCredential type
 
 
Documentation:
  • Rule & Cmdlet documentation updates – Cmdlet help file addition

Please use our GitHub page to submit feedback and browse ongoing discussions.



Raghu Shantha
Senior Software Engineer
PowerShell ScriptAnalyzer Team
https://github.com/PowerShell/PSScriptAnalyzer 

System Center Management Pack for Dynamics CRM 2016 is now available



Just a quick note to let you know that the System Center Management Pack for Dynamics CRM 2016 is now released. 

The Operations Manager management pack for CRM enables you to monitor Microsoft Dynamics CRM Server 2016 and Microsoft Dynamics CRM Server 2015 in Microsoft System Center Operations Manager. If you already have the Operations Manager management pack for CRM 2015 installed, the Operations Manager management pack for CRM 2016 will replace it.

You can find all the details as well as a download link here: http://www.microsoft.com/en-in/download/details.aspx?id=50379

Suraj Suresh Guptha | Program Manager | Microsoft

Get the latest System Center news on Facebook and Twitter:


System Center All Up: http://blogs.technet.com/b/systemcenter/

Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
Data Protection Manager Team blog: http://blogs.technet.com/dpm/
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm

Update 1512 now available in System Center Configuration Manager Technical Preview

A little more than a week ago, we made the latest version of System Center Configuration Manager generally available, and we are very excited that many of our customers have already upgraded to it! To provide continuous innovation, we have already been working...(read more)

Mailbag – Holiday 2015 Edition


Season's Greetings from the AskPFEPlat PFEs!

A solid Mailbag today with items from Paul Bergson, Rick Bergman, Matthew Walker, Mike Kline and Mike Hildebrand. Also, an AskPFEPlat alum – Mark Morowczynski – has started making Permanent Waves over in Azure AD land (did you catch the Rush reference I snuck in there? 🙂)

Let's roll…

First up, Paul B. brought this one to our attention and thought our readers would find this a welcome addition to our in-box NIC teaming:

 

Next, Rick Bergman offered up a reminder about the "Taste of Premier" video series and called out the most recent video, where Microsoft CEO Satya Nadella and GM of Microsoft's Cloud Platform Julia White discuss and demo cyber and operational security in the mobile-first, cloud-first world.

Future Post Teaser: Jessica Payne has a video out there, too, about our local admin password management tools, and very soon she will have a blog on our site to complement the video:

 

Matthew Walker, one of our resident Hyper-V gurus, sent over a post from Jake Oshins …

Windows Server 2016 Hyper-V will be supporting pass-through to more PCIe devices:

 

Mike Kline had a couple of GREAT points and an oldie but goodie about on-prem AD:

 

Finally, Hilde here with a few nuggets:

 

Cheers!

Azure AD Mailbag: Syncing with Azure AD Connect

Howdy folks, Happy Friday. Hopefully like most of us here in the Identity Division at Microsoft, you are finishing up and getting ready for a nice holiday break! Mark is back to finish out the year with another Azure AD Mailbag post. This time he's...(read more)

Top Support Solutions for System Center 2012 Operations Manager


Top Microsoft Support solutions for the most common issues experienced when you use System Center 2012 Operations Manager.

1. Solutions related to operations or infrastructure:

2. Solutions related to setup or deployment issues:

3. Solutions related to email notification issues:

4. Solutions related to agent connectivity issues:

5. Solutions related to agents in gray state:



Introducing the updated JEA Helper Tool

$
0
0

Hello Readers!

Just Enough Administration (JEA) is one of the very exciting security features coming with Windows Management Framework (WMF) 5.0, and one that you can also find in Windows Server 2016 Technical Preview 4 (TP4).

For those of you not familiar with JEA’s features and benefits, you can have a look at the experience guide available here.

With earlier previews, we had made available the JEA Toolkit Helper as a way to help start the experience of creating and testing the JEA “toolkits”, which defined what users could do in JEA sessions.

Today, we’re publishing a newer version of this tool, now called the “JEA Helper Tool 2.0”. The new name reflects the fact that JEA no longer deals with “toolkits”, but with “role capabilities” and “session configurations”, which are now built on top of the underlying PowerShell infrastructure.  The updated version number reflects how JEA has evolved in recent previews. In other words, you should not try to use the previous “JEA Toolkit Helper 1.0” with a recent version of JEA, as it does not understand the new concepts recently introduced.


To be fair, starting with JEA today and working with its new concepts should be easier now, and many of you may not even need to use the tool to get started – and that’s a good thing! That being said, there are still a few situations where the tool can help people new to JEA start their journey:

  • Discovering cmdlets and modules, to build a role capability through a graphical user interface. Role capability syntax tends to become more complex as you add parameters and validations for those parameters. The tool gives you a syntax to use – one that you can copy/paste into your own role capabilities, or just leverage as a learning point (see the sketch after this list)
  • Getting visibility into how role capabilities map to session configurations on the design machine
  • Helping generate the “Security Descriptor Definition Language” (SDDL) syntax when you want to use Two-Factor Authentication
  • Understanding the different cmdlets offered in WMF to work with JEA (create role capabilities, register session configurations, etc.) – the tool is written in PowerShell, so you can look at the script underneath to understand how it creates, maps, tests, registers, and unregisters sessions on the local machine
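As a hedged reference, here is a minimal sketch of what the underlying WMF 5.0 cmdlets can produce; the file names, group name, and ValidateSet values below are illustrative only, and role capability files normally need to live in a module’s RoleCapabilities folder to be discovered:

# Create a role capability file that only exposes Restart-Service for two whitelisted services
New-PSRoleCapabilityFile -Path .\ServiceOperator.psrc -VisibleCmdlets @{
    Name       = 'Restart-Service'
    Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler', 'W32Time' }
}

# Map the role capability to a group in a session configuration file, then register it
New-PSSessionConfigurationFile -Path .\ServiceOperator.pssc -SessionType RestrictedRemoteServer `
    -RoleDefinitions @{ 'CONTOSO\ServiceOperators' = @{ RoleCapabilities = 'ServiceOperator' } }
Register-PSSessionConfiguration -Name 'ServiceOperator' -Path .\ServiceOperator.pssc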

The tool also implements several best practices from the experience guide, as detailed in the last part of this blog post.

We hope you find this tool useful, and look forward to the feedback. Happy holidays!

Download Location and Requirements

You can get the tool from this location


Per the experience guide, JEA requires either:

  • an instance of Windows Server 2016 TP4 running, or
  • an instance of Windows Server 2012 or 2012 R2 with WMF 5.0 RTM

Features

The JEA Helper Tool 2.0 includes the following features:

List, edit, and create role capabilities on the local machine

This tab provides a simple and basic way to display/edit an existing Role Capability, or create/edit a new one. This provides access to other sections of a Role Capability (not just “VisibleCmdlets” and “VisibleFunctions”).


Design the “VisibleCmdlets” and “VisibleFunctions” sections

This includes graphically picking the cmdlets and modules, generating the list from the current PowerShell audit log, and/or adding Service Management Automation (SMA) Runbooks (this last piece is optional and requires an SMA endpoint to be configured in the script’s parameters).



You can then copy/paste these into your own Role Capabilities (when editing them in the first tab, for example).

Visualize, register, and unregister mappings of Role Capabilities to Session Configurations


Features like “Resultant Set” are also available for a specific selected row (remember that the tool also uses the script window to display status and outputs).


You can also test the Session Configuration for a specific user (you will be prompted for credentials).



The tool will warn you if it cannot execute one of these tasks, for example if you’re trying to test a Session Configuration invoking Active Directory cmdlets and the Active Directory PowerShell module is not present locally.

It is also possible to create new session configurations manually, or by copying from an existing one.

SDDL output generation

This tab helps to generate the “Security Descriptor Definition Language” (SDDL) format you can use to secure the JEA sessions. In particular, this may be interesting when you want to leverage two-factor authentication, which requires a custom SDDL.
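As a hedged example of how a generated SDDL string might be applied (the configuration name is from the earlier sketch and the SDDL value is a placeholder; substitute the string the tool generates for you):

# Apply an SDDL string (for example, one produced by this tab) to a JEA session configuration
$sddl = 'O:NSG:BAD:P(A;;GX;;;BA)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)'   # placeholder value
Set-PSSessionConfiguration -Name 'ServiceOperator' -SecurityDescriptorSddl $sddl -Force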


Best practices included in the tool

The tool tries to help with a few pitfalls and best practices from the experience guide, including:

  • Syntax for parameters, ValidateSet and ValidatePattern in Role Capabilities (output window in the second tab)
  • Grouping rows together for a single cmdlet
  • Removing ValidateSet if ValidatePattern is also used
  • Assigning commands to the appropriate “VisibleCmdlets” or “VisibleFunctions” section, depending on their command type. It also warns when it cannot determine the actual command type (if you are adding commands from a module not present locally, for example)

Recovering from Failed Disks in Tiered Storage Spaces on Windows 10 via PowerShell


I have a Windows 10 workstation at home which has a tiered storage space with two SSDs and six HDDs.  I have a mirrored virtual disk using both tiers, and I use this for regular storage, file sharing, and running my virtual machines.

A tiered storage space is one that mixes a "fast" set of SSDs and a "slow" set of spinning disks to create a storage space that is both big _and_ fast.  If you are unfamiliar with Tiered Storage Spaces, there are a couple of great blog posts here and here on the topic.

I had originally created this storage space when I was running Windows Server 2012 R2 with the GUI, but even though there is no GUI (and it’s not supported) the tiered storage space works like a champ on Windows 10.  While tiered storage spaces are not supported on Windows 10 (seriously, tiered spaces are intended for server only), all the commands below are the same across the client and server OS.

Due to some constraints during the initial build process, I ended up with mismatched (size-wise) SSDs in the fast tier.  I found another of my “small” SSDs on sale, and I wanted to recover my “big” SSD for repurposing.

The task in front of me was simple, swap the disks.  I was out of SATA ports on my system so I made the foolhardy move of simply unplugging the SSDtobeReplaced and plugging in the new SSD.  Naturally this caused my mirrored storage space to go into a “degraded” state.

  • Note: The MSFT documentation on storage spaces is _very_ clear.  DON’T DO THIS IF AT ALL POSSIBLE.  We really, really, really want to have the new disk in place before removing the old one.

As I started doing some research on how to fix my mess, I found some good documents on how to repair a tiered storage space using the GUI.  Only problem was that I had no GUI to do this as I am running a client OS now.  I could have rebuilt my machine as a server OS and fixed the space that way, but I thought I’d see what PowerShell could do.


It turns out that Windows 10 PowerShell includes all the storage space commands needed to create, modify, and manage a tiered storage space.  This was good news, except that there was little to no documentation on _how_ to actually do the repair.  Initially I tried adding the drive to the pool as normal with the Add-PhysicalDisk cmdlet.

 

PS C:\WINDOWS\system32> get-virtualdisk

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach    Size
------------ --------------------- ----------------- ------------ --------------    ----
NTFS         Mirror                Degraded          Warning      False          5.68 TB

PS C:\WINDOWS\system32> Get-PhysicalDisk

FriendlyName              SerialNumber    CanPool OperationalStatus  HealthStatus Usage            Size
------------              ------------    ------- -----------------  ------------ -----            ----
ATA Samsung SSD 840       S1DBNSAF925671Y True    OK                 Healthy      Auto-Select 232.89 GB
Disk1                     WD-WMAZA3795023 False   OK                 Healthy      Auto-Select   1.82 TB
Disk2                     WD-WCAZA5713932 False   OK                 Healthy      Auto-Select   1.82 TB
SSD3                                      False   Lost Communication Warning      Auto-Select  446.5 GB
Disk3                     WD-WCAVY5436882 False   OK                 Healthy      Auto-Select   1.82 TB
Disk4                     WD-WCAZA5724921 False   OK                 Healthy      Auto-Select   1.82 TB
Disk5                     WD-WCAVY5805593 False   OK                 Healthy      Auto-Select   1.82 TB
Disk6                     WD-WMAZA3791851 False   OK                 Healthy      Auto-Select   1.82 TB
SSD2                      S1DBNSCF810110W False   OK                 Healthy      Auto-Select 232.25 GB
Samsung SSD 840 EVO 250GB S1DBNSAF925533D False   OK                 Healthy      Auto-Select 232.89 GB

PS C:\WINDOWS\system32> $SSD3 = Get-PhysicalDisk -SerialNumber S1DBNSAF925671Y
PS C:\WINDOWS\system32> Add-PhysicalDisk -StoragePoolFriendlyName pool1 -Usage HotSpare -PhysicalDisks $SSD3
PS C:\WINDOWS\system32> Repair-VirtualDisk -FriendlyName ntfs
PS C:\WINDOWS\system32> Get-StorageJob

Name   ElapsedTime JobState  PercentComplete IsBackgroundTask
----   ----------- --------  --------------- ----------------
Repair 00:00:00    Completed 100             False

 

I thought it was odd that the repair job finished so quickly.  Sure enough, the virtual disk was still in a degraded state.

 

PS C:\WINDOWS\system32> get-virtualdisk

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach    Size
------------ --------------------- ----------------- ------------ --------------    ----
NTFS         Mirror                Degraded          Warning      False          5.68 TB

 

I tried to think about what might have gone wrong with the repair and started looking at the details of the physical disk, when I noticed that the media type of my SSD was listed as “Unspecified”:

PS C:\WINDOWS\system32> Get-PhysicalDisk -SerialNumber S1DBNSAF925671Y|fl

ObjectId                         : {1}\\NOTJJSB\root/Microsoft/Windows/Storage/Providers_v2\SPACES_PhysicalDisk.ObjectI
                                   d="{4f1a0953-5b0d-11e4-95d2-806e6f6e6963}:PD:{47abc017-9860-11e5-963d-00224d9ad22b}"
PassThroughClass                 :
PassThroughIds                   :
PassThroughNamespace             :
PassThroughServer                :
UniqueId                         : 0050430000000003
Description                      :
FriendlyName                     : ATA Samsung SSD 840
HealthStatus                     : Healthy
Manufacturer                     : ATA
Model                            : Samsung SSD 840
OperationalStatus                : OK
PhysicalLocation                 :
SerialNumber                     : S1DBNSAF925671Y
AllocatedSize                    : 268435456
BusType                          : RAID
CannotPoolReason                 : In a Pool
CanPool                          : False
DeviceId                         : 8
EnclosureNumber                  :
FirmwareVersion                  : EXT0
IsIndicationEnabled              :
IsPartial                        : False
LogicalSectorSize                : 512
MediaType                        : Unspecified
OtherCannotPoolReasonDescription :
PartNumber                       :
PhysicalSectorSize               : 512
Size                             : 249376538624
SlotNumber                       :
SoftwareVersion                  :
SpindleSpeed                     : Unknown
SupportedUsages                  : {Auto-Select, Manual-Select, Hot Spare, Retired...}
UniqueIdFormat                   : EUI64
Usage                            : Hot Spare
PSComputerName                   :
ClassName                        : MSFT_PhysicalDisk

 

This means that the storage pool had no idea this was an SSD (even though it was in the name), and so it didn’t know to use it to replace the failed drive in the SSD_Tier.  “Easy peasy,” I heard myself mutter, and I manually set the media type to SSD and re-tried the repair job.

 

PS C:\WINDOWS\system32> Get-PhysicalDisk -SerialNumber S1DBNSAF925671Y|Set-PhysicalDisk -MediaType SSD
PS C:\WINDOWS\system32> Repair-VirtualDisk -FriendlyName ntfs
PS C:\WINDOWS\system32> Get-StorageJob

Name   ElapsedTime JobState  PercentComplete IsBackgroundTask
----   ----------- --------  --------------- ----------------
Repair 00:00:00    Completed 100             False

 

 

PS C:\WINDOWS\system32> get-virtualdisk

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach    Size
------------ --------------------- ----------------- ------------ --------------    ----
NTFS         Mirror                Degraded          Warning      False          5.68 TB

 

BLERG!!  Still no joy on the repair job.  I was scratching my head and searching around when I came across a blog post by MVP Charbel Nemnom about how to replace a faulty disk in a two-way tiered storage space.

http://charbelnemnom.com/2014/09/step-by-step-how-to-replace-faulty-disk-in-two-way-mirrored-storage-tiered-space-storagespaces-ws2012r2/

 

I looked at the commands Charbel used and noticed that the only difference between what he had done and my failed attempt was that I had specified “hotspare” as my usage type.  I flipped the usage type to “autoselect” and re-ran the repair job.

 

PS C:\WINDOWS\system32> Get-PhysicalDisk -SerialNumber S1DBNSAF925671Y|Set-PhysicalDisk -Usage AutoSelect
PS C:\WINDOWS\system32> Repair-VirtualDisk -FriendlyName ntfs -AsJob

Id     Name            PSJobTypeName   State         HasMoreData     Location             Command
--     ----            -------------   -----         -----------     --------             -------
183    CimJob90        CimJob          Running       True            NOTJJSB              Repair-VirtualDisk -Fr...

 

And sure enough, it was WORKING! 

PS C:\WINDOWS\system32> Get-StorageJob

Name         ElapsedTime JobState PercentComplete IsBackgroundTask
----         ----------- -------- --------------- ----------------
Repair       00:10:48    Running  65              False
Regeneration 00:10:49    Running  66              True

 

After about 30 minutes of repair (boy are SSDs _fast_), I removed the “failed” SSD and had a healthy virtual disk again.  🙂

PS C:\WINDOWS\system32> $BadSSD = Get-PhysicalDisk -FriendlyName SSD3
PS C:\WINDOWS\system32> Remove-PhysicalDisk -StoragePoolFriendlyName pool1 -PhysicalDisks $BadSSD

 

I learned several lessons with this exercise. 

  1. Don’t just unplug an existing drive and swap in a new one. The MSFT documents are very clear on this point: add the new drive first if possible

  2. If I don’t have an existing hot-spare disk when a drive actually does go bad, I have to add that disk as “autoselect”

  3. Try to read the official MSFT documentation more carefully – the need to use “autoselect” is actually spelled out here.  🙂
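For anyone hitting the same wall, here is the condensed sequence that ultimately worked for me. The pool, disk, and virtual disk names are from my system; substitute your own:

# Add the replacement disk to the pool (leave Usage as the default Auto-Select; HotSpare will not feed the tier)
$NewSSD = Get-PhysicalDisk -SerialNumber S1DBNSAF925671Y
Add-PhysicalDisk -StoragePoolFriendlyName pool1 -PhysicalDisks $NewSSD

# Make sure the pool knows it is an SSD, or the SSD tier cannot use it
$NewSSD | Set-PhysicalDisk -MediaType SSD
$NewSSD | Set-PhysicalDisk -Usage AutoSelect

# Kick off the repair and watch the regeneration jobs
Repair-VirtualDisk -FriendlyName NTFS -AsJob
Get-StorageJob

# Once the virtual disk is healthy again, remove the failed disk from the pool
$BadSSD = Get-PhysicalDisk -FriendlyName SSD3
Remove-PhysicalDisk -StoragePoolFriendlyName pool1 -PhysicalDisks $BadSSD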

 

Thanks for reading and enjoy tiered storage spaces!

 JJ Streicher-Bremer

 

 

Microsoft Azure Stack: Hardware requirements


Contributed by Jeffrey Snover, Technical Fellow, Microsoft Corporation

Before we break for the holidays here in Redmond, I wanted to share some information that will help you plan your Azure Stack Technical Preview deployments in the new year.

We’ve been working hard on our “Azure in your datacenter” vision since this year’s Ignite conference. At this time, we’re ready to share hardware requirements for Azure Stack Technical Preview. For those of you who learn best visually, check out my video below.

Our goal is to enable you to experience the Azure Stack Technical Preview in a single server, instantiated as a Proof-of-Concept (POC) environment. To ensure a good experience, I encourage you to consider the "Recommended" server configuration below.

Hardware requirements for Azure Stack Technical Preview (POC)

Note that these requirements only apply to the upcoming POC release; they may change for future releases.

Component: Compute: CPU
  Minimum: Dual-Socket: 12 Physical Cores
  Recommended: Dual-Socket: 16 Physical Cores

Component: Compute: Memory
  Minimum: 96 GB RAM
  Recommended: 128 GB RAM

Component: Compute: BIOS
  Minimum and Recommended: Hyper-V Enabled (with SLAT support)

Component: Network: NIC
  Minimum and Recommended: Windows Server 2012 R2 Certification required for NIC; no specialized features required

Component: Disk drives: Operating System
  Minimum and Recommended: 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD)

Component: Disk drives: General Azure Stack POC Data
  Minimum: 4 disks. Each disk provides a minimum of 140 GB of capacity (SSD or HDD).
  Recommended: 4 disks. Each disk provides a minimum of 250 GB of capacity.

Component: HW logo certification
  Minimum and Recommended: Certified for Windows Server 2012 R2

Storage considerations

Data disk drive configuration: All data drives must be of the same type (SAS or SATA) and capacity.  If SAS disk drives are used, the disk drives must be attached via a single path (no MPIO or multi-path support is provided).

HBA configuration options:
     1. (Preferred) Simple HBA
     2. RAID HBA – Adapter must be configured in “pass through” mode
     3. RAID HBA – Disks should be configured as Single-Disk, RAID-0

Supported bus and media type combinations

  • SATA HDD
  • SAS HDD
  • RAID HDD
  • RAID SSD (if the media type is unspecified/unknown*)
  • SATA SSD + SATA HDD**
  • SAS SSD + SAS HDD**

* RAID controllers without pass-through capability can’t recognize the media type. Such controllers will mark both HDD and SSD as Unspecified. In that case, the SSD will be used as persistent storage instead of caching devices. Therefore, you can deploy the Microsoft Azure Stack POC on those SSDs.

** For tiered storage, you must have at least 3 HDDs.

Example HBAs: LSI 9207-8i, LSI-9300-8i, or LSI-9265-8i in pass-through mode
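Before deploying, a quick hedged check of what the OS reports for your drives can save time; this works on any recent Windows Server:

# List bus and media types as the storage subsystem sees them
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size | Format-Table -AutoSize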


While the above configuration is generic enough that many servers should fit the description, we recommend a couple of SKUs: Dell R630 and the HPE DL 360 Gen 9. Both these SKUs have been in-market for some time.

Hope you all have a great holiday season. Look around for those servers so you have one ready to go when we release the Azure Stack preview. See you in the new year!

Users can't access the desktop and other resources through Quick Access in Windows 10


If you use copyprofile when customizing your Windows 10 profiles, you may encounter a scenario where pinned icons, such as Desktop under Quick Access, are not accessible, and users may see an error similar to the following when attempting to access or save an item to that location.

“Location is not available. C:\Users\Administrator\Desktop is not accessible. Access is denied.”

Microsoft is aware of the issue and is investigating further. To work around this issue, or to fix the issue if user profiles are already deployed and experiencing this behavior, consider implementing any of the following options depending on your deployment scenario and requirements.

1. Before the image is created: Unpin the "desktop" shortcut from Quick Access prior to sysprep/copyprofile. The "desktop" shortcut under This PC will not be available upon profile creation. All other customizations will be retained.

2. After the image is created and deployed, to address new logons: After sysprep (e.g. while in OOBE or logged in), delete the following file from the default profile. This will remove any customizations made to the Quick Access list prior to sysprep/copyprofile.

a. %systemdrive%\users\default\appdata\roaming\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms

3. After the image is created and deployed, to address existing logons: Delete the file per-user so it's regenerated the next time Explorer is opened (again, losing any customizations):

a. %appdata%\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms

4. After the image is created and deployed, to address existing logons: Have the user unpin and re-pin the Desktop from Quick Access after logon.

For steps 2a and 3a, you can utilize group policy preferences to deploy this to users that might be already experiencing the issue in their environment.

2a: %systemdrive%\users\default\appdata\roaming\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms


3a: %appdata%\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms
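Option 3 can also be scripted per-user; a minimal sketch (the file name comes from the paths above):

# Delete the Quick Access jump list file for the current user; Explorer regenerates
# it (without the stale pins) the next time it is opened
Remove-Item "$env:APPDATA\Microsoft\Windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms" -Force -ErrorAction SilentlyContinue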



Keeping browsing experience in users’ hands

$
0
0

In April last year we announced some changes to our criteria around Adware, designed to ensure that users maintain control of their experience. These changes are described in our blog, Adware: a New Approach. Since then, we’ve taken policy and enforcement measures to address unwanted behaviors exhibited by advertising programs that take choice and control away from users.

Ad injection software has evolved, and is now using a variety of ‘man-in-the-middle’ (MiTM) techniques. Some of these techniques include injection by proxy, changing DNS settings, network layer manipulation and other methods. All of these techniques intercept communications between the Internet and the PC to inject advertisements and promotions into webpages from outside, without the control of the browser. Our intent is to keep the user in control of their browsing experience and these methods reduce that control.

There are many additional concerns with these techniques, some of which include:

  • MiTM techniques add security risk to customers by introducing another vector of attack to the system.
  • Most modern browsers have controls in them to notify the user when their browsing experience is going to change and confirm that this is what the user intends. However, many of these methods do not produce these warnings and reduce the choice and control of the user.
  • Many of these methods also alter advanced settings and controls that the majority of users will not be able to discover, change, or control.

To address these concerns and to keep the intent of our policy, we’re updating our Adware objective criteria to require that programs that create advertisements in browsers must only use the browsers’ supported extensibility model for installation, execution, disabling, and removal.

The choice and control belong to the users, and we are determined to protect that.

We encourage developers in the ecosystem to comply with the new criteria. We are providing an ample notification period, and we will work with developers as they fix their programs to become compliant.  Programs that fail to comply will be detected and removed.

Enforcement starts on March 31, 2016.


Adding Work Folders to Office locations


With the latest Windows 10 November release, Work Folders can be surfaced as a location where users can easily access the documents under it.

This new feature is announced on this page: http://windows.microsoft.com/en-in/windows-10/work-folders-in-windows-10. In this blog post, I’d like to share some more details on the user experience, and how you can enable this on Windows 7 machines.

Background

The Office team offers a way for storage providers to register their service with Microsoft Office, so that the location automatically appears in the Open and Save As user interface in Office 2013 applications. You can find the details here: https://www.microsoft.com/en-us/download/details.aspx?id=35474. The same extension is offered in Microsoft Office 2016, with a slight change in the provider registration GUID format, which I’ll explain in this blog post.

Work Folders in Windows 10 builds in support for this and allows users to easily add Work Folders to the locations. You can also enable this capability on Windows 7 clients with some prep work.

Add a place

On Windows 10 devices with the November update, Work Folders is registered with Office applications by default. You can open any Office application, navigate to Add a Place in Open/Save As, and you will be able to see the following:

Click on the Work Folders item; this adds Work Folders as one of the location links associated with the user account. You only need to do this once, on one of your clients.

Open/Save As

Once the location is added, when you open or save a document in any Office application, Work Folders will be shown like the following:

Clicking on the Work Folders item goes directly to the Work Folders path on the client, which makes access much easier.

Known issues

Because the location list is associated with the user account, if you are using mixed Office versions, you may see both Work Folders (Office 2013) and Work Folders (Office 2016). You can find more details on the known issues here: http://social.technet.microsoft.com/wiki/contents/articles/32881.troubleshooting-using-work-folders-as-a-place-in-microsoft-office.aspx

Windows 7

Work Folders for Windows 7 was released a while back, and it does not register with the Office applications. You can enable this by adding the following registry keys; make sure to provide the correct path for the LocalFolderRoot:

(Notice below that the regkey path for Office 2013 uses the GUID without {}, and the regkey path for Office 2016 uses the GUID with {}.)

Registry key for Office 2013 plugin:

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\Common\Cloud Storage\F02A2795-066C-47BA-936F-1517BA462169]

"LocalFolderRoot"=

"DisplayName"="Work Folders (Office 2013)"

"Description"="Use Work Folders to make your work files available on all devices you use, even when offline."

"LearnMoreURL"="http://go.microsoft.com/fwlink/?LinkId=623357"

"ManageURL"="http://go.microsoft.com/fwlink/?LinkId=623357"

 [HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\Common\Cloud Storage\F02A2795-066C-47BA-936F-1517BA462169\Thumbnails]

"Url16x16"="http://i-technet.sec.s-msft.com/dynimg/IC826548.jpeg"

"Url20x20"="http://i-technet.sec.s-msft.com/dynimg/IC826549.jpeg"

"Url24x24"="http://i-technet.sec.s-msft.com/dynimg/IC826550.jpeg"

"Url32x32"="http://i-technet.sec.s-msft.com/dynimg/IC826551.jpeg"

"Url40x40"="http://i-technet.sec.s-msft.com/dynimg/IC826552.jpeg"

"Url48x48"="http://i-technet.sec.s-msft.com/dynimg/IC826553.jpeg"

 

Registry key for Office 2016 plugin:

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\Common\Cloud Storage\{F02A2795-066C-47BA-936F-1517BA462169}]

"LocalFolderRoot""=

"DisplayName"="Work Folders (Office 2016)"

"Description"="Use Work Folders to make your work files available on all devices you use, even when offline."

"LearnMoreURL"="http://go.microsoft.com/fwlink/?LinkId=623357"

"ManageURL"="http://go.microsoft.com/fwlink/?LinkId=623357"

 [HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\Common\Cloud Storage\{F02A2795-066C-47BA-936F-1517BA462169}\Thumbnails]

"Url16x16"="http://i-technet.sec.s-msft.com/dynimg/IC826548.jpeg"

"Url20x20"="http://i-technet.sec.s-msft.com/dynimg/IC826549.jpeg"

"Url24x24"="http://i-technet.sec.s-msft.com/dynimg/IC826550.jpeg"

"Url32x32"="http://i-technet.sec.s-msft.com/dynimg/IC826551.jpeg"

"Url40x40"="http://i-technet.sec.s-msft.com/dynimg/IC826552.jpeg"

"Url48x48"=”http://i-technet.sec.s-msft.com/dynimg/IC826553.jpeg

Once the registry keys are added, Work Folders will be surfaced as a place that can be added, and you can go through the actions described in “Add a Place” to add Work Folders in any Office application.
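If you prefer scripting over .reg files, here is a minimal sketch for the Office 2013 key; the LocalFolderRoot value assumes the default Work Folders path, so adjust it to your environment (the Thumbnails subkey can be populated the same way using the URLs from the listing above):

$key = 'HKCU:\SOFTWARE\Microsoft\Office\Common\Cloud Storage\F02A2795-066C-47BA-936F-1517BA462169'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name LocalFolderRoot -Value "$env:USERPROFILE\Work Folders"   # assumed default path
Set-ItemProperty -Path $key -Name DisplayName -Value 'Work Folders (Office 2013)'
Set-ItemProperty -Path $key -Name Description -Value 'Use Work Folders to make your work files available on all devices you use, even when offline.'
Set-ItemProperty -Path $key -Name LearnMoreURL -Value 'http://go.microsoft.com/fwlink/?LinkId=623357'
Set-ItemProperty -Path $key -Name ManageURL -Value 'http://go.microsoft.com/fwlink/?LinkId=623357'
# For Office 2016, repeat with the GUID wrapped in braces: '{F02A2795-066C-47BA-936F-1517BA462169}'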

Changes implemented by Essentials Role on Windows Server 2012 R2


[This post comes to us courtesy of Sandeep Biswas and Rituraj Choudhary from Global Business Support]

Today we will discuss the changes made to the server when the Essentials Experience role is installed and configured on a Windows Server 2012 R2 machine in an existing Active Directory domain.

The Essentials role requires the following server roles and their dependent features to be installed:

1. .Net Framework 4.5 Features

2. BranchCache

3. Remote Server Administration Tools

4. Web Server (IIS)

5. Windows Process Activation Service

6. Windows Server Backup

Additionally, while configuring the server using the Configure Windows Server Essentials wizard, the following role is installed (see the dependency-check sketch after this list):

1. Active Directory Certificate Services
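
If you want to see which of these dependencies are already present before running the wizard, a quick check from PowerShell works well. This is a sketch only; the feature names below are the usual Server Manager names on Windows Server 2012 R2, so confirm them with Get-WindowsFeature on your build:

# Sketch: check the roles/features the Essentials Experience role depends on.
# Feature names assume Windows Server 2012 R2; verify with Get-WindowsFeature.
$features = 'NET-Framework-45-Features', 'BranchCache', 'RSAT',
            'Web-Server', 'WAS', 'Windows-Server-Backup', 'ADCS-Cert-Authority'
Get-WindowsFeature -Name $features | Format-Table Name, InstallState

# Installing the role itself pulls in its dependencies
# ('ServerEssentialsRole' is the assumed feature name; verify on your build):
# Install-WindowsFeature ServerEssentialsRole -IncludeManagementTools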

When the Essentials role is configured on a server, it automates certain changes to the server. These are described below:

Active Directory Modifications

1. The Server’s machine account is added as a member of the following groups:

a. Pre-Windows 2000 Compatible Access: A backward compatibility group which allows read access to all users and groups in the domain

b. Cert Publishers: Members of this group are permitted to publish certificates to the directory

2. The following Managed Service Accounts are created:

a. MediaAdmin: Service account used by Windows Server Essentials Media Streaming Service during configuration

b. ServerAdmin: Service account used by Windows Server Essentials Management Service during configuration

3. The ServerAdmin account is added as a member of the Administrators, Domain Admins, and Enterprise Admins groups. The MediaAdmin account is added as a member of the Administrators group.

4. The following Global Security Groups are created:

a. WseAlertAdministrators: Users with permissions to view alerts in the network

b. WseAllowAddInAccess: Users with permissions to access Windows Server Essentials Add-ins

c. WseAllowComputerAccess: Users with permissions to access computers remotely in Remote Web Access

d. WseAllowDashboardAccess: Users with permissions to access Dashboard remotely in Remote Web Access

e. WseAllowHomePageLinks: Users with permissions to access links gadget in Remote Web Access

f. WseAllowMediaAccess: Users with permissions to access the media library in Remote Web Access

g. WseAllowShareAccess: Users with permissions to access shared folders in Remote Web Access

h. WseInvisibleToDashboard: Domain users that are hidden from Windows Server Essentials Dashboard

i. WseManagedGroups: Groups managed by Windows Server Essentials

j. WseRemoteAccessUsers: Users with permissions to use VPN to connect to the server network remotely

k. WseRemoteWebAccessUsers: Users with permissions to use Remote Web Access

5. The Domain Admins security group is added as a member of all of the Essentials-specific global security groups except WseInvisibleToDashboard. (A sketch for verifying these groups and accounts follows.)
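
To verify these directory changes in your own domain, a sketch like the following lists the Wse* security groups and the two managed service accounts (it assumes the ActiveDirectory RSAT module is installed):

# Sketch: enumerate the groups and managed service accounts created by the role.
Import-Module ActiveDirectory
Get-ADGroup -Filter 'Name -like "Wse*"' | Sort-Object Name |
    Format-Table Name, GroupScope, GroupCategory
Get-ADServiceAccount -Filter 'Name -eq "MediaAdmin" -or Name -eq "ServerAdmin"' |
    Format-Table Name, Enabled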

 

Windows Server Essentials services that are installed and configured

1. Windows Server Essentials Computer Backup Service: This service helps you back up data from and restore data to a client computer

2. Windows Server Essentials Health Service: This service evaluates key health criteria and generates alert notifications when an important condition is met

3. Windows Server Essentials Management Service: This is the centralized management pivot for the Windows Server Essentials Experience role. It manages the system settings and background tasks of Windows Server Essentials

4. Windows Server Essentials Media Streaming Service: This service provides media streaming from the server to the client computers

5. Windows Server Essentials Notification Service: This service manages the Notifications Provider Service for the Windows Server Essentials Experience role

6. Windows Server Essentials Provider Registry Service: This service registers and enables discoverability of server role services and providers on computers running Windows Server Essentials

7. Windows Server Essentials Storage Service: This service manages the storage of the server
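
After configuration, you can quickly confirm that these services exist and are running; their display names all start with "Windows Server Essentials", so a minimal check looks like this:

# Sketch: confirm the Windows Server Essentials services are present and running.
Get-Service -DisplayName 'Windows Server Essentials*' |
    Sort-Object DisplayName | Format-Table DisplayName, Status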

 

Web sites that are added and configured in the Internet Information Services (IIS) Manager console

1. Default Web Site

    - Bin
    - CertEnroll
    - CertSrv
    - Connect
    - Customization
    - home
    - Remote
    - Resources
    - services

2. Mac Web Service

    - bin

3. WSS Certificate Web Service

    - Bin
    - download
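
These sites and applications can also be confirmed from PowerShell rather than the IIS Manager console; here is a minimal sketch using the WebAdministration module:

# Sketch: list the sites and applications that Essentials adds to IIS.
Import-Module WebAdministration
Get-Website | Format-Table Name, State, PhysicalPath
Get-WebApplication -Site 'Default Web Site' | Format-Table Path, PhysicalPath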

 

Active Directory Certificate Services components that are configured

1. Certification Authority: The root CA is used to issue certificates to users, computers, and services, and to manage their validity

2. CA Web Enrollment: Web enrollment allows users to connect to a CA by means of a Web browser in order to:

  • Request and review certificate requests
  • Retrieve certificate revocation lists (CRLs)
  • Perform smart card certificate enrollment
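
A quick way to confirm that the CA is installed and answering requests is sketched below; CertSvc is the Certification Authority service, and certutil ships with Windows:

# Sketch: confirm the certification authority is up and answering requests.
Get-Service CertSvc | Format-Table Name, Status
certutil -ping    # pings the AD CS request interface of the default CA
certutil -dump    # displays the active CA configuration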

Note: When you attempt to deploy the Windows Server Essentials Experience role on a workgroup machine, the configuration will first ask you to bring up a new Active Directory domain and configure the other roles and features that the role depends on. Once that completes successfully, the Essentials role configuration will begin. For more information, please refer to this blog.

Windows Management Framework (WMF) 5.0 currently removed from Download Center


We recently released Windows Management Framework (WMF) 5.0 RTM, delivering many requested improvements and fixes, via the Microsoft Download Center, as announced in a previous blog post. However, we have discovered a bug which resets the PowerShell module environment during installation. As this issue can have a serious impact on our customers, we are taking the action to stop delivery of WMF 5.0 RTM, and have removed the packages from the Download Center. Additionally, we will be unpublishing Azure DSC Extension Handler versions 2.11 and 2.12, as they automatically install WMF 5.0 RTM.

We will deliver a revised WMF package as soon as the issue can be isolated, corrected, and validated. This issue only impacts users that have installed WMF 5.0 RTM (KB3094174, KB3094175, KB3094176), and does not affect preview versions of WMF 5.0, including WMF 5.0 Production Preview. We will publish further updates regarding a new WMF package to this blog post.

Specifically, this bug resets the PSModulePath environment variable to its default values, losing any changes made prior to the WMF 5.0 RTM installation. If you have already installed WMF 5.0 RTM, this issue has already affected you; we will provide guidance on moving forward to the new WMF packages as soon as we have validated a fix.
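
If you have already installed the affected packages, you can check whether your customizations survived and re-add any entry that was dropped. A minimal sketch follows; 'D:\PowerShell\Modules' is a hypothetical custom path, so substitute your own:

# Sketch: inspect the machine-wide PSModulePath that the WMF 5.0 RTM installer
# resets, and re-append a custom module directory if it was dropped.
# 'D:\PowerShell\Modules' is a hypothetical example; replace it with your own path.
$machinePath = [Environment]::GetEnvironmentVariable('PSModulePath', 'Machine')
$machinePath -split ';'    # review the current entries
$custom = 'D:\PowerShell\Modules'
if (($machinePath -split ';') -notcontains $custom) {
    [Environment]::SetEnvironmentVariable('PSModulePath', "$machinePath;$custom", 'Machine')
}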

- Windows PowerShell Team

Windows Server 2012 R2 VPN Interoperability with Cisco ASA

The IKEv2 implementation differences between the Windows RRAS Gateway and the Cisco ASA result in non-interoperability between the two VPN devices (documented in this VPN Interoperability guide). This issue is affecting a number of customers as...(read more)

Top Support Solutions for System Center 2012 Configuration Manager


Top Microsoft Support solutions for the most common issues experienced when you use System Center 2012 Configuration Manager (updated quarterly).

Note: Some content is for earlier versions of Configuration Manager but is listed here because the solution provided remains applicable to the current version.

1. Solutions related to application deployment:

2. Solutions related to software update deployment issues:

3. Solutions related to operating system deployment issues:

4. Solutions related to client not installed, not active, not assigned, or missing actions:

5. Solutions related to task sequence issues in operating systems deployment:
