Windows Server 2016 Deep Dive - Netfox AG

Windows Server 2016 Deep Dive
Roj Mircov, Technology Solution Professional | Virtualization, Microsoft Deutschland GmbH, [email protected]

Agenda: Hyper-V General; Using Nano Server for Hyper-V; Driving Operational Efficiencies;

Security Improvements; Isolation Improvements; Availability Improvements; Scalability Improvements; Running Linux on Hyper-V; Upgrading your virtual infrastructure; Understanding Containers; Resources.

Hyper-V General - looking back:
- Windows Server 2008 / System Center 2008: introduced the virtualization platform and management
- Windows Server 2012 / System Center 2012 and Windows Server 2012 R2 / System Center 2012 R2: industry-leading scale and performance
- Microsoft Azure: Azure as design point

Gartner Magic Quadrant for x86 Server Virtualization Infrastructure (Thomas J. Bittman, Philip Dawson, Michael Warrilow; July 14, 2015): Microsoft a Leader for five consecutive years. Gartner positions Microsoft in the Leaders Quadrant in the Magic Quadrant for x86 Server Virtualization Infrastructure based on its completeness of vision and ability to execute in the market. The x86 server virtualization infrastructure market is defined by organizations that are looking for solutions to decouple applications from their x86 server hardware or OSs, reducing underutilized server hardware and associated hardware costs, and increasing flexibility in delivering the server capacity that applications need. Microsoft is currently the only vendor to be positioned as a Leader in Gartner's Magic Quadrants for Cloud Infrastructure as a Service, Server Virtualization, Application Platform as a Service, and Cloud Storage Services, and we believe this validates Microsoft's strategy to enable the power of choice as we deliver industry-leading infrastructure services, platform services, and hybrid solutions. Download the report at no cost:

http://www.gartner.com/technology/reprints.do?id=1-2JGMVZX&ct=150715&st=sb
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. All statements in this report attributable to Gartner represent Microsoft's interpretation of data, research opinion or viewpoints published as part of a syndicated subscription service by Gartner, Inc., and have not been reviewed by Gartner. Each Gartner publication speaks as of its original publication date (and not as of the date of this presentation). The opinions expressed in Gartner publications are not representations of fact, and are subject to change without notice.
Case Study GAD: https://customers.microsoft.com/Pages/CustomerStory.aspx?recid=20051 and http://www.appy-geek.com/Web/ArticleWeb.aspx?

The story so far:
SCALE: 64 vCPUs per VM; 1 TB RAM per VM; 4 TB RAM per host; 320 logical processors per host; 64 TB VHDX; 1,024 VMs per host; vNUMA
NETWORKING: integrated network virtualization; network virtualization gateway; extended port ACLs; vRSS; dynamic teaming
AGILITY: Dynamic Memory; Live Migration; LM with compression; LM over SMB Direct; Storage LM; Shared Nothing LM; cross-version LM; hot add/resize VHDX; Storage QoS; live VM export
HETEROGENEOUS: Linux; FreeBSD
AVAILABILITY: host clustering; 64-node clusters; guest clustering; Shared VHDX; Hyper-V Replica
AND MORE: Gen 2 VMs; Enhanced Session; Auto VM Activation
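Several of the agility features above are driven entirely from PowerShell. As an illustrative sketch (the host, VM, and path names are hypothetical), a shared-nothing live migration moves a running VM and its storage to another host with no shared infrastructure:

```powershell
# Sketch: shared-nothing live migration - moves the running VM and its
# VHDX files to another Hyper-V host over the network, no shared storage
# required. "VM01", "HOST02", and the destination path are illustrative.
Move-VM -Name "VM01" -DestinationHost "HOST02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"
```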

Built in. We want you to be at the center of application innovation. How much remains un-virtualized?
[Graphic: CLOUD FIRST - APPLICATIONS AND SERVICES - EFFICIENCY]

INFRASTRUCTURE

EVOLUTION OF THE DATACENTER
Why is the business using shadow IT? Why is my CIO looking at agile alternatives? Why is investment in apps growing so much faster than IT? The opportunity is to rethink your datacenter: think services, not servers.
Traditional datacenter vs. Microsoft Azure datacenter:
- Tight coupling between infrastructure and apps vs. loosely coupled apps and micro-services
- Expensive, vertically integrated hardware vs. industry-standard hardware
- Silo-ed infrastructure and operations vs. service-focused DevOps
- Highly customized processes and configurations vs. standardized processes and configurations

Looking ahead:
- Windows Server 2008 / System Center 2008: introduced the virtualization platform and management
- Windows Server 2012 / System Center 2012 and Windows Server 2012 R2 / System Center 2012 R2: industry-leading scale and performance
- Microsoft Azure: Azure as design point
- Windows Server 2016 / System Center 2016 / Microsoft Azure: cloud-first innovation - infrastructure and application platform

Introducing Microsoft Azure Stack - the power of Azure with the control of the datacenter: app innovation across Windows Server and Linux, a portal, IaaS | PaaS services, and

cloud-inspired infrastructure that is hybrid, hyper-scale, and enterprise-grade. The cloud infrastructure (powered by Windows Server, System Center, and Azure technologies) runs as Microsoft Azure Stack in your datacenter and as Microsoft Azure in the public cloud.

What's new in Technical Preview 4:
- Compute (industry-standard servers): nested virtualization; improved PowerShell support for VM upgrade/versioning; node fairness for better resource utilization; improved container support.
- Networking (physical network): network controller now deployable in a high-availability mode; improved East-West load balancing; Virtual Machine Multi-Queue to enable 10G+ performance.
- Storage (industry-standard disks): enhancements to Storage Spaces Direct for increased efficiency; Storage Health Service with a single monitoring point per cluster; Shared VHDX; Storage QoS offers increased flexibility with maximum IOPS.
- Security (TPM-enabled hardware): Shielded VMs have increased robustness and availability for production environments; Just Enough Administration for domain controllers and for server maintenance roles.

Hyper-V on Nano Server - customer voice:
- "Reboots impact my business": Why do I have to reboot because of a patch to a component I never use? When a reboot is required, the systems need to be back in service ASAP.
- "Server images are too big": Large images take a long time to install and configure; transferring images consumes too much network bandwidth; storing images requires too much disk space.

- "Infrastructure requires too many resources."
- Security impact.

The story so far - roles and features:
- GUI shell / Full Server: Windows NT to Windows Server 2003
- Server Core: Windows Server 2008 and Windows Server 2008 R2
- Minimal Server Interface and Server Core: Windows Server 2012 and Windows Server 2012 R2

Our cloud journey:
- Azure: patches and reboots interrupt service delivery - (VERY large number of servers) * (large OS resource consumption) => COGS; provisioning large host images competes for network resources.
- Cloud Platform System (CPS), the cloud-in-a-box running Windows Server & System Center: excessive time required to fully deploy; patching impacts network allocation - a fully loaded CPS would live migrate > 16 TB for every host OS patch, network capacity that could otherwise have gone to business uses; costly reboots result in service disruption (compute host ~2 minutes, storage host ~5 minutes).

The next step in the journey - Nano Server: a new headless, 64-bit-only deployment option for Windows Server. A deep refactoring with a cloud emphasis, for:
- Cloud fabric & infrastructure (clustering, storage, networking)
- Born-in-the-cloud applications (PaaS v2, ASP.NET 5)
- VMs & containers (Hyper-V & Docker)
Nano Server extends the Server Core pattern (Nano Server within Server Core within Server with a Desktop Experience):

- Roles & features live outside of Nano Server: no binaries or metadata in the OS image; standalone packages install like apps.
- Full driver support.
- Antimalware.

[Chart: Servicing improvements - Important bulletins, Critical bulletins, and Reboots required for Nano Server, Server Core, and Full Server; Nano Server requires far fewer of each. Analysis based on all patches released in 2014.]

[Chart: Security improvements - Drivers loaded, Services running, and Ports open for Nano Server vs. Server Core; Nano Server exposes a much smaller surface.]

[Chart: Resource utilization improvements - Process count, Boot IO (MB), and Kernel memory in use (MB) for Nano Server vs. Server Core.]

[Chart: Deployment improvements - Setup time (sec), Disk footprint (GB), and VHD size (GB) for Nano Server vs. Server Core; e.g. setup time of roughly 40 s vs. 300 s, disk footprint roughly 0.29 GB vs. 4.84 GB, VHD size roughly 0.31 GB vs. 6.3 GB.]

Nano Server - roles & features:
- Zero-footprint model: server roles and optional features live outside of Nano Server as standalone packages that install like applications.
- Key roles & features: Hyper-V, storage (SoFS), and clustering;

Core CLR, ASP.NET 5 & PaaS; full Windows Server driver support; antimalware; built-in System Center and App Insights agents to follow.

Getting started: Nano Server is an installation option, like Server Core, but it cannot be selected during Setup and must be customized with drivers. It is located on the Windows Server media and available within the Windows Server Technical Preview.

Nano Server quick start: scripts included in the Nano Server folder make it easy to build a customized Nano Server image - New-NanoServerImage.ps1 and Convert-WindowsImage.ps1. Use the scripts to generate a Nano Server image for a physical machine or a virtual machine.

Nano Server customizations:
Required: add the right set of drivers for the hardware or VM*; add the roles or features required for the server's role*; set the Administrator password*; convert the WIM to a VHD*.
Optional: set the computer name*; run commands on first boot, e.g. to set a static IP address; domain join*; dual boot; enable Emergency Management Services (EMS)*; install agents and tools.
(* supported by New-NanoServerImage.ps1)

Remotely managing Nano Server: Remote Server Management Tools; Core PowerShell & WMI;

PowerShell DSC; Hyper-V Manager; Failover Cluster Manager; Server Manager; Perfmon, Event Viewer, Disk Manager, Device Manager.

Driving Operational Efficiencies

Production checkpoints - fully supported for production environments and key workloads:
- Easily create point-in-time images of a virtual machine that can be restored later in a way that is completely supported for all production workloads.
- VSS: the Volume Shadow Copy Service (VSS) is used inside Windows virtual machines to create the production checkpoint, instead of saved-state technology.
- Familiar: no change to the user experience for taking or restoring a checkpoint; restoring a checkpoint is like restoring a clean backup of the server.
- Linux: Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint.
- Production checkpoints are the default for new virtual machines.

Hyper-V Manager improvements - multiple improvements make it easier to remotely manage and troubleshoot Hyper-V servers: support for alternate credentials, connecting via IP address, and connecting via WinRM.

ReFS accelerated VHDX operations: the Resilient File System maximizes data availability despite errors that would historically cause data loss or downtime. Hyper-V takes advantage of this intelligent file system for instant fixed-disk creation and instant disk-merge operations.

VM configuration changes: a new virtual machine configuration file with

a binary format for efficient performance at scale, resilient logging for changes, and new file extensions .VMCX and .VMRS.

Hypervisor power management: an updated hypervisor power management model supports new modes of power management - Connected Standby works!

RemoteFX improvements: support for the OpenGL 4.4 and OpenCL 1.1 APIs; larger dedicated and configurable VRAM; support for Generation 2 virtual machines.

Storage Improvements

Storage Spaces in Windows Server 2012 R2 - spotlight capabilities (Hyper-V cluster connected over SMB3):

- Virtualize storage: transform low-cost, high-volume hardware with Storage Spaces.
- High performance: combine HDDs and SSDs in tiered Storage Spaces to meet demand from intensive workloads.
- Reduce costs: eliminate complex and costly storage area network (SAN) infrastructure without sacrificing SAN-like capabilities.
- Flexible: independently scale capacity and compute to grow with your business demands.
- Resilient: multiple layers of redundancy across disks, enclosures, connectivity, and file server nodes ensure the highest availability.
Today's solution with Windows Server 2012 R2 and System Center 2012 R2: scale-out file servers on industry-standard x86 servers, with SAS connectivity to tiered Storage Spaces (SSDs and HDDs) on industry-standard JBODs.

Storage Spaces reliability:
- Mirror resiliency: a 2-copy mirror tolerates one drive failure, a 3-copy mirror tolerates two drive failures; suitable for random I/O.
- Parity resiliency: lower-cost storage using LRC encoding; tolerates up to two drive failures; suitable for large sequential I/O.
- Enclosure awareness: tolerance for the failure of an entire drive enclosure.
- Parallel rebuild: on drive failure, a pseudo-random distribution weighted to favor less-used disks means reconstructed space is spread widely and rebuilt in parallel.

Storage Spaces Direct: software-defined storage for the private cloud using industry-standard servers with

local storage.
- Cloud design points and management: standard servers with local storage; new device types such as SATA and NVMe SSDs; prescriptive hardware configurations; deploy, manage, and monitor with SCVMM, SCOM & PowerShell.
- Reliability, scalability, flexibility: fault tolerance to disk, enclosure, and node failures; scale pools to a large number of drives; simple and fine-grained expansion; fast VM creation and efficient VM snapshots.
- Use cases: Hyper-V IaaS storage; storage for backup and replication targets; hyper-converged (compute and storage together) or converged (compute and storage separate) deployments.

Storage Spaces Direct - your choice of topology:
- Hyper-converged: compute and storage resources together; compute and storage scale and are managed together; typically small to medium-sized scale-out deployments.
- Converged (disaggregated): compute and storage resources separate - Hyper-V cluster(s) connected over an SMB3 storage network fabric to a scale-out file server cluster; compute and storage scale and are managed independently; typically larger scale-out deployments.

Storage Spaces Direct partners: Cisco UCS C3160 Rack Server, Dell PowerEdge R730xd, Fujitsu Primergy RX2540 M1, HP Apollo 2000 System, Intel Server Board S2600WT-based systems, Lenovo System x3650 M5, Quanta D51PH.

Storage Replica - synchronous and asynchronous storage

replication, independent of the underlying hardware. Two scenarios:
1. Stretch cluster: synchronous SR over SMB3 stretches a cluster (e.g. HVCLUS) across sites - Manhattan DC and Jersey City DC - for HA.
2. BCDR with MASR: synchronous or asynchronous cluster-to-cluster/server-to-server data mirrors (e.g. FSCLUS in Manhattan DC to DRCLUS in Jersey City DC) with Microsoft Azure Site Recovery orchestration.

Block-level, host-based, synchronous volume replication: works with any Windows volume and uses SMB3 transport; an end-to-end software stack from Microsoft; totally hardware agnostic - even your existing SANs work; management through the Failover Cluster Manager UI, the ASR UI, or the CLI.

Driver layering (source SR server at the Monaco site, destination SR server at the Nice site; identical stacks on both sides): SMB server / file system filters / NTFS/ReFS / VolSnap filter / BitLocker filter / volume manager / SR filter / partition manager / disk driver / disk.

Example synchronous workflow (requirement: max. 5 ms between source and destination): (1) an application (local or remote) writes to the source server node; (2) the write lands in the source node's log; (3) the data is replicated into the destination node's log; (4) the destination acknowledges; (5) the application IO completes. At time t/t1, each node flushes its log to the data volume.
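Configured from PowerShell, a server-to-server Storage Replica partnership along these lines drives the workflow above (a sketch; the computer, replication-group, and volume names are illustrative):

```powershell
# Sketch: create a synchronous Storage Replica partnership between two
# servers. F: is the data volume and G: the log volume on each side;
# all names here are illustrative.
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "rg01" `
    -SourceVolumeName "F:" -SourceLogVolumeName "G:" `
    -DestinationComputerName "SR-SRV02" -DestinationRGName "rg02" `
    -DestinationVolumeName "F:" -DestinationLogVolumeName "G:" `
    -ReplicationMode Synchronous
```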

Security Improvements

Evolving security threats: a rising number of organizations suffer from breaches - increasing incidents, bigger motivations,

bigger risk. Headlines:
- "Cyberattacks on the rise against US corporations" - New York Times [2014]
- "How hackers allegedly stole unlimited amounts of cash from banks in just a few hours" - Ars Technica [2014]
- "Espionage malware infects rafts of governments, industries around the world" - Ars Technica [2014]
- "The biggest cyberthreat to companies could come from the inside" - Cnet [2015]
- "Cybercrime costs US economy up to $140 billion annually, report says" - Los Angeles Times [2014]
- "Malware burrows deep into computer BIOS to escape AV" - The Register [September 2014]
- "Forget carjacking, soon it will be carhacking" - The Sydney Morning Herald [2014]

Central risk: administrator privileges. Stolen admin credentials, phishing attacks, insider attacks - each of these attacks seeks out and exploits privileged accounts. We know that administrators have the keys to the kingdom; we gave them those keys decades ago. But those administrators' privileges are being compromised through social engineering, bribery, coercion, and private initiatives.

Conclusion: change the way we think about security. We have to assume breach - not a position of pessimism, but one of security rigor. The problem: a breach will happen (or already did); we lack the security-analysis manpower; we can't determine the impact of the breach; we are unable to adequately respond to the breach. The new approach (in addition to

prevention): limit or block the breach from spreading; detect the breach; respond to the breach.

Protect virtual machines - challenges in protecting high-value virtual machines:
- Administrators of any seized or infected host can access guest virtual machines.
- It is impossible to identify legitimate hosts without hardware-based verification.
- Tenants' VMs are exposed to storage and network attacks while unencrypted.

Protect virtual machines - Microsoft's approach:
- Hardware-rooted technologies to separate the guest operating system from host administrators.
- A guarded fabric to identify legitimate hosts and certify them to run shielded tenant VMs.
- Virtual Secure Mode: virtualized trusted platform module (vTPM) support to encrypt virtual machines, plus process and memory access protection from the host.
- Host Guardian Service: the enabler to run shielded (BitLocker-enabled) virtual machines on a legitimate, trusted host in the fabric.

So what is a Shielded VM? The data and state of a Shielded VM are protected against inspection, theft, and tampering from both malware and datacenter administrators (fabric admins, storage admins, server admins, network admins).

Protect virtual machines - how it works with Windows Server and System Center: a trusted administrator provisions VMs and enables BitLocker inside them; the Host Guardian Service and SCVMM manage the legitimate hosts (host verification and host management) on logo-certified server hardware (UEFI, TPM v2.0, virtualization, IOMMU); Virtual Secure Mode protects OS secrets; secure and measured boot supplies the attestation information; and a key management service (vTPM key management) releases encrypted keys and certificates for the VM TPMs.

Protect virtual machines - Shielded Virtual Machines: Shielded VMs can only run in fabrics that are designated as owners of that virtual machine, and need to be encrypted (by BitLocker or other means) to ensure that only the designated owners can run this virtual

machine. You can convert a running virtual machine into a Shielded Virtual Machine.
[Diagram: the Host Guardian Service with Shielded Virtual Machines and their virtual hard disks running on hosts with a TPM; a generic host without a TPM cannot run them.]

Shielded VMs - security assurance goals:
- Encryption and data at-rest/in-flight protection: a virtual TPM enables the use of disk encryption within a VM (e.g. BitLocker); both Live Migration traffic and VM state are encrypted.
- Admin-lockout: host administrators cannot access guest VM secrets (e.g. cannot see disks or video) and cannot run arbitrary kernel-mode code.
- Attestation of health: VM workloads can only run on healthy hosts.

Attestation modes (mutually exclusive):
Hardware-trusted attestation (TPM-based): more complex

setup/configuration:
- Register each Hyper-V host's TPM (EKpub) with the HGS; establish a baseline CI policy for each different H/W SKU; deploy an HSM and use HSM-backed certificates.
- New Hyper-V host hardware may be required: hosts need to support TPM v2.0 and UEFI 2.3.1.
- Highest levels of assurance: trust rooted in hardware; compliance with the code-integrity policy required for key release (attestation); data protection at rest and on the wire; secure DR to a hoster (the VM is already shielded).
- Typical for service providers.
Admin-trusted attestation (Active Directory-based): simplified deployment and configuration:
- Set up an Active Directory trust and register a group; authorize a Hyper-V host to run shielded VMs by adding it to the Active Directory group.
- Existing hardware is likely to meet the requirements.
- Weaker levels of assurance: the fabric admin is trusted; no hardware-rooted trust or measured boot; no enforced code integrity.
- Typical for enterprises.

Attestation workflow (hardware-trusted): 1. Start a shielded VM. 2. The attestation client initiates the attestation protocol

(REST API on the Host Guardian Service node). 3. The guarded host sends its boot & CI measurements. 4. The Attestation Service (IIS web app) validates the host measurements. 5. It issues a signed attestation certificate, encrypted to the host, for use with the Key Protection Service (IIS web app).

Attestation workflow (admin-trusted): 1. Start a shielded VM. 2. The attestation client initiates the attestation protocol (REST API on the Host Guardian Service node). 3. The guarded host presents a Kerberos service ticket. 4. The Attestation Service (IIS web app) validates group membership. 5. It issues a signed attestation certificate, encrypted to the host, for use with the Key Protection Service (IIS web app).

Protect virtual machines - Virtual Secure Mode: Virtual Secure Mode prevents an infected host from accessing a protected VM's

memory and processor state. Virtual Secure Mode introduces the concept of Virtual Trust Levels (VTLs), which consist of memory access protections, virtual processor state, and an interrupt subsystem:
- VTLs are a security mechanism on top of the existing privilege enforcement (ring 0/ring 3).
- Memory access protections: a VTL's memory access protections can only be changed by software running at a higher VTL.
- Virtual processor state: processor state is isolated between VTLs.
- Interrupt subsystem: interrupts are managed securely at a particular VTL, without risk of a lower VTL generating unexpected interrupts or masking interrupts.

Protect virtual machines - Host Guardian Service:
- Holds the keys of the legitimate fabrics as well as of the encrypted virtual machines.
- Runs as a service that verifies whether a host is a trusted machine: host attestation and vTPM key management.
- Can live anywhere - even as a virtual machine - with the fabric guardian role held by Microsoft, a service provider, or your own fabric.
- Hyper-V-based code integrity.

Linux Secure Boot - providing kernel code integrity protections for Linux guest operating systems. Works with Ubuntu 14.04 and later and with SUSE Linux Enterprise Server 12. PowerShell to enable:
Set-VMFirmware Ubuntu -SecureBootTemplate MicrosoftUEFICertificateAuthority
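For a quick lab look at the vTPM building block without a full Host Guardian Service, a Generation 2 VM can be given a TPM backed by a local key protector (a sketch only - this is not a production-shielding configuration, and the VM name is illustrative):

```powershell
# Sketch: enable a virtual TPM on a Generation 2 VM using a local key
# protector (test/lab only - production shielding uses HGS-issued
# key protectors). "VM01" is an illustrative VM name.
Set-VMKeyProtector -VMName "VM01" -NewLocalKeyProtector
Enable-VMTPM -VMName "VM01"
# The guest can now use the vTPM, e.g. to turn on BitLocker.
```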

Isolation Improvements

Storage Quality of Service (QoS) - control and monitor storage performance:
- Simple out-of-box behavior: enabled by default for Scale-Out File Server; automatic metrics per VHD, VM, host, and volume, including normalized IOPS and latency.
- Flexible and customizable policies: a policy per VHD, VM, service, or tenant; define minimum and maximum IOPS; fair distribution within a policy.
- Management: System Center VMM and Operations Manager; PowerShell built in for Hyper-V and SoFS.
[Diagram: virtual machines on a Hyper-V cluster with per-node rate limiters, connected to a Scale-Out File Server cluster running the Policy Manager and per-node I/O schedulers.]

Storage Quality of Service (QoS) building blocks:

1. Profiler and rate limiter on the Hyper-V compute nodes.
2. I/O scheduler distributed across the storage nodes.
3. Centralized Policy Manager on the Scale-Out File Server cluster.

Responding to changing demand - the policy process: [diagram of the Hyper-V cluster's rate limiters and the Scale-Out File Server cluster's Policy Manager and I/O schedulers reacting to changing demand]

Storage QoS policies - understanding policies: policies are defined on the Scale-Out File Server and applied to Hyper-V virtual disks; the rest is automatic. Sample policy: Name SilverVM,

PolicyID 8d730190-518f-4087-9362-3971255acf36, MinimumIOPs 100, MaximumIOPs 200, Type Multi-Instance.

Types of Storage QoS policies:
- Single-Instance: the resource is distributed among the VMs; ideal for representing a clustered workload, application, or tenant (one MaximumIOPs = 200 budget shared across the VMs).
- Multi-Instance: all VMs perform the same; ideal for creating per-VM performance tiers (each VM gets its own MaximumIOPs = 200).

Policies with PowerShell:
# Deployment - Create policy (on File Server)
New-StorageQosPolicy -CimSession FS -Name SilverVM -PolicyType MultiInstance -MaximumIops 200
# Deployment - Assign policy to VMs (on Hyper-V Host)
$Policy = Get-StorageQosPolicy -CimSession FS -Name SilverVM
Get-VM -Name VMName* | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicy $Policy
# Monitoring - Retrieve all flows (on File Server)
Get-StorageQosFlow
# Monitoring - Retrieve flows using the policy (on File Server)
Get-StorageQosPolicy -Name SilverVM | Get-StorageQosFlow

Host Resource Protection:

Dynamically identifies virtual machines that are not "playing well" and reduces their resource allocation. Pioneered in Azure and enabled by default, it is designed to help prevent a VM from consuming excessive hardware resources, looking for patterns of activity that shouldn't occur within a non-malicious VM.

Availability Improvements

Failover clustering - integrated solution, enhanced in Windows Server Technical Preview:
- VM compute resiliency: provides resiliency to transient failures such as a temporary network outage or a non-responding node. In the event of node isolation, VMs continue to run even if the node falls out of cluster membership; this is configurable based on your requirements, with a default of 4 minutes.
- VM storage resiliency: preserves tenant virtual machine session state in the event of transient storage disruption. The VM stack is quickly and intelligently notified on failure of the underlying block- or file-based storage infrastructure; the VM is quickly moved to a PausedCritical state and waits for storage to recover, preserving session state.
- Node quarantine: unhealthy nodes are quarantined and are no longer allowed to join the cluster, preventing them from negatively affecting other nodes and the overall cluster. A node is quarantined if it unexpectedly leaves the cluster three times within an hour; once a node is placed in quarantine, VMs are live migrated off the node without downtime to the VMs.

Guest clustering with Shared VHDX:
- Flexible and secure: not bound to the underlying storage topology; removes the need to present the physical underlying storage to a guest OS. *NEW* Shared VHDX supports online resize.
- Streamlined VM shared storage: Shared VHDX files can be presented to

multiple VMs simultaneously as shared storage. The VM sees a shared virtual SAS disk that it can use for clustering at the guest OS and application level; this utilizes SCSI persistent reservations. The Shared VHDX can reside on a Cluster Shared Volume (CSV) on block storage, or on SMB file-based storage. *NEW* Shared VHDX supports Hyper-V Replica and host-level backup.

Hyper-V Replica - integrated software-based VM replication:
- VM replication capabilities built into Windows Server 2012 R2 Hyper-V.
- Configurable replication frequencies of 30 seconds, 5 minutes, and 15 minutes.
- Secure replication across the network by using certificates.
- Once Hyper-V Replica is enabled, VMs begin replication; the initial replica and subsequent replicated changes flow from the primary site to the secondary site on the chosen frequency; upon site failure, the replicated VMs can be started on the secondary site.
- Flexible solution, agnostic of the network, server, and storage hardware on either site; no need for other virtual machine replication technologies, reducing costs.
- Automatic handling of live migration.
- Simple configuration and management through Hyper-V Manager, PowerShell, or Azure Site Recovery.
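The replication settings described above map to the Hyper-V replication cmdlets; a sketch (the server and VM names are illustrative):

```powershell
# Sketch: enable replication of VM01 to a replica server every 5 minutes
# using Kerberos authentication, then kick off the initial replica.
# "VM01" and "replica01.contoso.com" are illustrative names.
Enable-VMReplication -VMName "VM01" `
    -ReplicaServerName "replica01.contoso.com" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "VM01"
```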

Support for hot-add VHDX: when you add a new virtual hard disk to a virtual machine that is being replicated, it is automatically added to the not-replicated set. This set can be updated online:
Set-VMReplication "VMName" -ReplicatedDisks (Get-VMHardDiskDrive "VMName")

Memory management - complete flexibility for optimal host utilization:

- Static memory: startup RAM represents memory that is allocated regardless of VM memory demand. *NEW* Runtime resize: administrators can now increase or decrease VM memory without VM downtime; memory cannot be decreased below current demand, or increased above physical system memory.
- Dynamic Memory: enables automatic reallocation of memory between running VMs, resulting in increased utilization of resources, improved consolidation ratios, and reliability for restart operations. Runtime resize also works with Dynamic Memory.

Virtualization and networking - virtual network adapter enhancements:
- *NEW* Administrators now have the ability to add or remove virtual NICs (vNICs) from a VM without downtime. Enabled by default, with Gen 2 VMs only; vNICs can be added using the Hyper-V Manager GUI or PowerShell.
- Full support: any supported Windows or

Linux guest operating system can use the hot-add/remove vNIC functionality. vNIC identification: New capability to name vNIC in VM settings and see name inside guest operating system. Add-VMNetworkAdapter -VMName TestVM SwitchName Virtual Switch -Name TestNIC -Passthru | Set-VMNetworkAdapter -DeviceNaming on Operational Improvements Evolving Hyper-V Backup New architecture to improve reliability, scale and performance. Decoupling backing up virtual machines from backing up the underlying storage. No longer dependent on hardware snapshots for core backup functionality, but still able to take advantage of hardware capabilities when they are present. Built in change tracking for Backup Most Hyper-V backup solutions today implement kernel level file system filters in order to gain efficiency.

This makes it hard for backup partners to update to newer versions of Windows and increases the complexity of Hyper-V deployments. Efficient change tracking for backup is now part of the platform.

VM Configuration Changes
New virtual machine configuration file: binary format for efficient performance at scale, resilient logging for changes, and new file extensions .VMCX (configuration) and .VMRS (runtime state).

Running Linux on Hyper-V
Heterogeneous integration: deploy and manage Linux and FreeBSD as first-class citizens.
Broad support: Run Red Hat, SUSE, openSUSE, CentOS, Ubuntu, Debian, Oracle Linux, and FreeBSD workloads with full support.
Increased utilization: Run Windows, Linux, and FreeBSD side by side, driving up utilization and reducing hardware costs.

Simplified management: Single experience for managing, monitoring, and operating the infrastructure, reducing personnel and operational costs.
Working together: Microsoft works as part of the Linux community to deliver interoperability in the datacenter, and also works closely with partners such as Red Hat, Oracle, Canonical, OpenLogic, SUSE, NetApp, and Citrix to ensure an optimal Linux and FreeBSD experience.

Heterogeneous integration: deploy and manage Linux and FreeBSD as first-class citizens.
Optimized Hyper-V drivers: Linux Integration Services (LIS) and FreeBSD Integration Services (BIS) provide an optimal experience and significant performance improvements.
Legacy Linux releases: Microsoft provides an ISO containing installable LIS drivers.
Legacy FreeBSD releases: Microsoft provides ports that contain the installable BIS drivers and corresponding daemons for

FreeBSD releases before 10.0.
New releases: LIS is built into the Linux OS and BIS is built into the FreeBSD operating system. No separate downloads or installations are required, except for a KVP ports download that is needed for FreeBSD 10.0.

Heterogeneous integration: deploy and manage Linux and FreeBSD as first-class citizens.
Compute: Up to 64 vCPUs per VM; full Dynamic Memory support, including online resize; support for configuration of the MMIO gap.
Storage: Hot-add and online resize of storage, Virtual Fibre Channel, and TRIM support.
Networking: vRSS, TSO, checksum offload, and jumbo frames support, along with hot-add of vNICs.
Backup: Zero-downtime backup of live VMs.
Security: Secure Boot ensures Linux guest OS components are verified using signatures present in the UEFI data store.
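The storage capabilities above can be exercised with the generic Hyper-V storage cmdlets. A sketch with illustrative VM names and paths:

```powershell
# Hot-add a SCSI data disk to a running Linux VM.
Add-VMHardDiskDrive -VMName "LinuxVM" -ControllerType SCSI -Path "C:\VHDs\data.vhdx"

# Grow the virtual disk online; a supported Linux guest sees the
# extra capacity without a reboot.
Resize-VHD -Path "C:\VHDs\data.vhdx" -SizeBytes 100GB
```

Online resize assumes the disk is a VHDX attached to a SCSI controller, as described above.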

Secure Boot for Linux currently supports Ubuntu 14.04 and later, and SUSE Linux Enterprise Server.

[Architecture diagram: parent partition with configuration store, worker processes, WMI provider, and VM management service on the Windows kernel, with Virtualization Service Providers and independent hardware vendor drivers; guest partitions running in enlightened mode with optimized performance, optimized synthetic devices, and Virtualization Service Clients; the Hyper-V hypervisor running on the server hardware.]

Cluster OS rolling upgrades
Upgrade cluster nodes without downtime to key workloads.
Streamlined upgrades: Upgrade the OS of the

cluster nodes from Windows Server 2012 R2 to Windows Server Technical Preview without stopping the Hyper-V or Scale-Out File Server (SOFS) workloads. The Hyper-V cluster infrastructure can keep pace with innovation without impacting running workloads.
Phased upgrade approach:
1. A cluster node is paused and drained of workloads by using the available migration capabilities.
2. The node is evicted, and its operating system is replaced with a clean install of Windows Server Technical Preview.
3. The new node is added back into the active cluster. The cluster is now in mixed mode.
This process is repeated for the other nodes. The cluster functional level stays at Windows Server 2012 R2 until all nodes have been upgraded. Upon completion, the administrator executes Update-ClusterFunctionalLevel.
[Diagram: shared storage with Windows Server 2012 R2 cluster nodes 0-3 being replaced, one at a time, by updated Windows Server cluster nodes.]
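The phased approach maps onto the Failover Clustering cmdlets roughly as follows (node name illustrative):

```powershell
# 1. Pause one node and live-migrate its workloads away.
Suspend-ClusterNode -Name "Node1" -Drain

# 2. Evict the node, clean-install the new OS, re-add it to the
#    cluster, then bring it back into service:
Resume-ClusterNode -Name "Node1"

# While in mixed mode, the functional level stays at 2012 R2:
Get-Cluster | Select-Object ClusterFunctionalLevel

# 3. Only after ALL nodes run the new OS, commit (not reversible):
Update-ClusterFunctionalLevel
```

These cmdlets must be run against a live failover cluster; the eviction and reinstall steps in between are the manual part of the process.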

Virtual machine upgrades
New virtual machine upgrade and servicing processes.
Compatibility mode: When a VM is migrated to a Windows Server Technical Preview host, it remains in Windows Server 2012 R2 compatibility mode. Windows Server Technical Preview supports previous-version VMs in compatibility mode; upgrading a VM is separate from upgrading the host. VMs can be moved back to earlier versions until they have been manually upgraded. By running Update-VMVersion, a VM is upgraded to the newest hardware version and can use the new Hyper-V features:
Update-VMVersion vmname
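A quick sketch of checking and upgrading configuration versions (the VM name is illustrative):

```powershell
# Migrated VMs keep the 2012 R2 configuration version until upgraded.
Get-VM | Select-Object Name, Version

# One-way upgrade: afterwards the VM can no longer be moved back
# to a Windows Server 2012 R2 host.
Update-VMVersion -Name "TestVM"
```

The VM must be shut down before its configuration version can be upgraded.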

Once upgraded, VMs can take advantage of new features of the underlying Hyper-V host.
Servicing model: VM drivers (integration services) are updated as necessary, and updated VM drivers are pushed directly to the guest operating system via Windows Update.
[Diagram: VMs moving between Windows Server 2012 R2 Hyper-V and Windows Server Technical Preview Hyper-V hosts.]

Understanding Containers
Containers: a new approach to build, ship, deploy, and instantiate applications.

Physical era:

Applications were traditionally built and deployed onto physical systems with a 1:1 relationship; new applications often required new physical systems for isolation of resources.
Virtual era: Higher consolidation ratios and better utilization; faster app deployment than in a traditional physical environment; apps deployed into VMs with high compatibility success; apps benefited from key VM features such as live migration and HA.
Container era: Package and run apps within

containers, on physical or virtual hosts.
Key benefits: further accelerates app deployment; reduces the effort to deploy apps; streamlines development and testing; lowers costs associated with app deployment; increases server consolidation.

Why Containers?
Applications are fueling innovation in today's cloud and mobile world. Containers unlock productivity and freedom for both developers and operations: they enable write-once, run-anywhere apps and can be deployed as multi-tier distributed apps in IaaS/PaaS models. Containers offer a powerful

abstraction for microservices and enhance familiar IT deployment models: they provide standardized environments for development, QA, and production teams; abstract away differences in OS distributions and underlying infrastructure; and deliver higher utilization and compute density.
DevOps: Integrate people, process, and tools for an optimized app development process. Operations focuses on standardized infrastructure; developers focus on building, deploying, and testing apps.

Containers: an isolated runtime environment for hosted applications.

Container dependencies: Every application has its own dependencies, which include both software (services, libraries) and hardware (CPU, memory, storage).
Virtualization: The container engine is a lightweight virtualization mechanism that isolates these dependencies per application by packaging them into virtual containers.
Shared host OS: A container runs as an isolated process in user space on the host OS, sharing the kernel with other containers.
Flexible: Differences in the underlying OS and infrastructure are abstracted away, streamlining a deploy-anywhere approach.
Fast: Containers can be created almost instantly, enabling rapid scale-up and scale-down in response to changes in demand.
[Diagram: App A and App B, each with their own binaries/libraries, running side by side on a container management stack over a host OS with container support, on a server.]

How do containers differ from virtual machines?
Dependencies: Each virtualized app includes the app itself, its required binaries and libraries, and a guest OS, which may consist of multiple GB of data.
Independent OS: Each VM can have a different OS from other VMs, and a different OS from the host itself.
Flexible: VMs can be migrated to other hosts to balance resource usage and for host maintenance, without downtime.
Secure: High levels of resource and security isolation for key virtualized workloads.
[Diagram: virtual machines — App A and App B, each with its binaries/libraries on its own guest OS, on a hypervisor, on a server.]

Container ecosystem: container run-time, container images, image repository.
[Diagram: the container stack layers an operating system image and application frameworks under the applications, with the container run-time on the host operating system of a physical host; with virtual machines, the container run-time and containers sit on a guest operating system, over hardware virtualization on the host operating system.]

Microsoft's container runtimes
Windows Server Container:

Hosting; highly automated; scalable and elastic; secure; efficient; trusted multi-tenancy.
Hyper-V Container: shared hosting; regulated workloads; highly automated; secure; scalable and elastic; efficient; public multi-tenancy. Modern app dev, flexible isolation.

Container run-times: Docker, PowerShell, and others sit above both run-times, working against shared Windows container images (application frameworks on a base OS image) — write once, deploy anywhere.
Container management: the right tools for you — PowerShell, Docker, and others.
Development frameworks and languages: PHP, Python, Eclipse, Go, Node, .NET, Win32, Ruby, Perl, C++, Java, JavaScript.

Integrating with Docker
Docker integration: joint strategic investments to drive containers forward.
Docker: An open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere.
Partnership: Enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider — a Dockerized app runs anywhere: in a Windows Server container or a Linux container, in the customer datacenter or in the cloud.

Strategic investments in Windows Server 2016:

Open source development of the Docker Engine for Windows Server; Azure support for the Docker Open Orchestration APIs; federation of Docker Hub images into the Azure Gallery and Portal. [Diagram: Dockerized apps running in the customer datacenter, on Microsoft Azure, and at service providers.]

Docker integration: joint strategic investments to drive containers forward.
Docker Hub: Huge collection of open and curated applications available for download: https://hub.docker.com
Collaboration: Bring Windows Server containers to the Docker ecosystem to expand the reach of both developer communities.
Docker Engine: The Docker Engine for Windows Server containers will be developed under the Docker open source project.
Docker client: Windows customers will be able to use the same standard Docker client and interface on multiple development environments.
[Diagram: docker.exe (examples: docker run, docker images) talks to a Docker Engine daemon on Windows Server (Windows Server container support) or on Linux (Linux container support, LXC) over the Docker Remote API.]
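The same standard Docker client commands apply on Windows. A sketch from a PowerShell prompt on a container host with the Docker engine running (the base image name is the Technical Preview-era one and may differ):

```powershell
docker images                           # list images in the local repository
docker run -it windowsservercore cmd    # start an interactive Windows Server container
docker ps -a                            # show running and stopped containers
```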

Docker Remote API examples: GET /images/json, POST /containers/create.

Development with Containers
DevOps process with containers:
1. Developers build and test apps in containers, using their development environment, e.g. Visual Studio.
2. Containers are pushed to a central repository; developers update, iterate, and deploy updated containers.
3. Operations automates deployment and monitors deployed apps from the central repository, and collaborates with developers by providing app metrics and insights.

TP3: Windows Server Containers
Anatomy and key capabilities:

[Diagram: Containers A, B, and C — web tier, app tier, and DB tier — each holding an LOB app with its binaries, with libraries shared across containers, on a host OS with container support, on a physical or virtual server.]
Spotlight capabilities:
Build: write, run, and scale within containers.
Run: container capabilities built into Windows Server.
Manage: deploy and manage using PowerShell.
Resources: define resources per container.
Network: IP options for connectivity.

New in TP4: Hyper-V Containers
Anatomy and key capabilities:
Spotlight capabilities:

Consistency: consistent container APIs.
Compatibility: identical container images.
Strong isolation: dedicated kernel copy.
Highly trusted: proven Hyper-V technology.
Optimized: virtualization layer and OS optimized for containers.
[Diagram: Hyper-V Containers — App A with its binaries/libraries on a Windows guest OS optimized for Hyper-V Containers, on the hypervisor, on a server.]

Container use cases
Workload characteristics: scale-out, distributed, state-separated, rapid (re)start — a good fit for scale-out web tiers, distributed compute, and tasks; databases match only some of these characteristics.
Deployment characteristics: efficient hosting, multitenancy, rapid deployment, highly automatable, rapid scaling.
Container OS environments: Nano Server (highly optimized; born-in-the-cloud applications) and Server Core (highly compatible; traditional applications).

Resources

Resources
Download Windows Server 2016 Preview: http://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
Explore Technical Preview documentation: https://technet.microsoft.com/en-us/library/mt420609.aspx
Explore Containers documentation: https://msdn.microsoft.com/virtualization/
Explore Nano Server documentation: https://technet.microsoft.com/en-us/library/mt126167.aspx

Resources II
Storage Replica in Windows Server 2016 Technical Preview: https://technet.microsoft.com/en-us/library/mt126104.aspx
Storage Spaces Direct in Windows Server 2016 Technical Preview: https://technet.microsoft.com/en-us/library/mt126109.aspx
Windows Server 2016 Technical Preview 4: https://technet.microsoft.com/en-us/library/mt126143.aspx
What's New in Windows Server 2016 Technical Preview 4: https://technet.microsoft.com/en-us/library/dn765472.aspx
Getting Started with Nano Server: https://technet.microsoft.com/en-US/library/mt126167.aspx
Windows Containers documentation: https://msdn.microsoft.com/virtualization/windowscontainers/containers_welcome
The Next Generation of Azure Compute Platform with Mark Russinovich (Channel 9 video): https://channel9.msdn.com/events/Build/2015/3-618

© 2015 Microsoft Corporation. All rights reserved. Microsoft, Windows and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

Backup slides: Nano Server configuration

Nano Server roles and features
The Nano Server folder has a Packages sub-folder. Role or feature -> package file:
Hyper-V role -> Microsoft-NanoServer-Compute-Package.cab
Failover Clustering -> Microsoft-NanoServer-FailoverCluster-Package.cab
File Server role and other storage components -> Microsoft-NanoServer-Storage-Package.cab
Windows Defender Antimalware, including a default signature file -> Microsoft-NanoServer-Defender-Package.cab
Reverse forwarders for application compatibility, for example common application frameworks such as Ruby, Node.js, etc. -> Microsoft-OneCore-ReverseForwarders-Package.cab

Nano Server on a physical machine
1. Mount the Technical Preview ISO and, assuming the drive letter for the mounted image is D:, copy the scripts from the mounted image to a new local folder:
md C:\NanoServer
copy D:\NanoServer\*.ps1 C:\NanoServer
2. Dot-source the scripts:
cd C:\NanoServer
. .\Convert-WindowsImage.ps1
. .\New-NanoServerImage.ps1
3. Generate a VHD from NanoServer.wim, specify the computer name and admin password, and add the Hyper-V (Compute), Failover Clustering, and OEM drivers packages:
New-NanoServerImage -MediaPath D:\ -BasePath C:\NanoServer -TargetPath C:\NanoServerVHD -Compute -OEMDrivers -Clustering -ComputerName NanoServerVHD -AdministratorPassword ("Passw0rd!" | ConvertTo-SecureString -AsPlainText -Force)

Core PowerShell on Nano Server
Built on the .NET Core runtime: lean, composable, open source, cross-platform. Reduced disk footprint: 55 MB total — CoreCLR (45 MB) + PowerShell (8 MB) + modules (2 MB). Full language, subset of features, most cmdlets. PowerShell remoting (server-side only), backwards compatible with existing PowerShell remoting clients down to PowerShell 2.0. File transfer over PowerShell remoting, remote script authoring and debugging in the ISE, and cmdlets for managing Nano Server components.

Using PowerShell remoting
Two key steps: add the IP of the Nano Server to your management computer's list of trusted hosts, then connect:
Set-Item WSMan:\localhost\Client\TrustedHosts "<Nano Server IP>"
$ip = "<Nano Server IP>"
$s = New-PSSession -ComputerName $ip -Credential ~\Administrator
Enter-PSSession $s

Image Creation
The local repository holds the container OS image (C:\Windows\*); a new, initially empty sandbox layer is created on top of it.

A framework (for example node.js) is installed into the sandbox (C:\nodeJS); committing the sandbox turns it into an Application Framework image layered on top of the container OS image. A new sandbox is then created on top of the Application Framework image and the application itself is installed into it (C:\myApp); committing that sandbox produces an Application image. At every step, the container view is the union of all layers beneath it: C:\Windows\*, C:\nodeJS, and C:\myApp.

Demo: Windows Server Containers and PowerShell

Development Process Using Containers
1. Developers choose desired application frameworks and pull them locally from central repositories; required dependencies are automatically identified and pulled locally.
2. Developers use the same programming languages and environments they are accustomed to, and applications are compiled and assembled in the same way.
3. A new container image containing the application written by the developer is built on top of the framework image in the local repository.
4. The new application container image can then be pushed to a central repository, where it can be shared with other developers, used for unit testing, and staged for integration or QA.
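The Windows Server Containers and PowerShell demo used the Technical Preview's Containers PowerShell module. A heavily hedged sketch follows — cmdlet names and parameters are taken from the TP3/TP4 quick-starts and changed between previews, so treat them as illustrative rather than final:

```powershell
# Create a container from a base OS image, attached to a virtual switch.
$container = New-Container -Name "Demo" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
Start-Container -Name "Demo"

# Work inside the container over PowerShell remoting.
Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator
# ... install an app or framework inside the session, then exit ...

# Capture the modified container as a new, reusable image.
Stop-Container -Name "Demo"
New-ContainerImage -ContainerName "Demo" -Publisher Demo -Name "DemoApp" -Version 1.0
```

The resulting image then appears in the local repository and can be used to create further containers, mirroring the layered image-creation flow described above.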
