Licensing made simple for Windows Server 2016

Introduction

Windows Server 2016 is licensed per core.  Because processors always have an even number of cores, licenses are sold in two-core packs: one license pack covers two cores.

To run Windows Server 2016, you must purchase licenses for a minimum of 16 cores per two physical processors.

In other words, you need to purchase a minimum of 8x two-core packs for every two physical processors in your server.  This is the equivalent of a regular Windows Server 2012 R2 Standard license.

Simple, right?

If you have more than 8 cores per processor, you simply purchase 1x two-core license pack for every two cores past the 16-core minimum.

Example:  You have a server with two processors, each with 10 cores, for 20 cores total.  You purchase the minimum 8x two-core packs, which covers 16 of your cores, then purchase an additional 2x two-core license packs to cover the remaining 4 cores.

Question:  Wait, what?  Is it really that easy?

Yes.

Question:  So what if my server has two processors, but each processor has only 6 cores?

You would still need to purchase the minimum 8x two-core packs, licensing you for a total of 16 cores, even though you only have 12 cores in total.  Don’t worry: this still costs the same as a Windows Server 2012 R2 Standard license.

Question:  What if I have 4 physical processors in my server?

Then you would need to purchase twice the minimum: 16x two-core license packs.  Beyond that, you buy 1x two-core license pack for every two cores past the combined 32-core minimum.
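The pack arithmetic above can be captured in a short Python sketch.  This is only an illustration of the rules as described in this article; the function name is mine, not anything from Microsoft:

```python
import math

def two_core_packs_needed(processors: int, cores_per_processor: int) -> int:
    """Two-core license packs needed for one physical server, per the
    rules above: a minimum of 8 packs (16 cores) per two physical
    processors, plus 1 pack for every two cores beyond that minimum."""
    total_cores = processors * cores_per_processor
    minimum_packs = math.ceil(processors / 2) * 8   # 8 packs per processor pair
    packs_for_cores = math.ceil(total_cores / 2)    # 1 pack per two cores
    return max(minimum_packs, packs_for_cores)

# The examples from this article:
print(two_core_packs_needed(2, 10))  # 20 cores -> 10 packs (8 minimum + 2 extra)
print(two_core_packs_needed(2, 6))   # 12 cores -> still the 8-pack minimum
print(two_core_packs_needed(4, 8))   # 4 processors -> 16-pack minimum
```

Note how the two-processor, six-core case still returns the 8-pack minimum, matching the Q&A above.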

Virtualization Rights

Virtualization rights with Windows Server 2016 Standard are, relatively speaking, the same as they are with Windows Server 2012 R2 Standard.

You could install Windows Server 2016 Standard on your physical server, install and use ONLY the Hyper-V (and supporting) roles/features, and then run two Windows Server 2016 virtual machines (VMs) on that same physical server under the same Windows Server 2016 license.

Note 1:  As a general rule, you should rarely (if ever) install the Standard edition of Windows Server 2016 on a physical server you are using as a Hyper-V host.  You should instead install Hyper-V Server 2016 (Microsoft’s free hypervisor OS).

You can run two virtual machines for every 8x two-core license packs you purchase.  In practice, you would install Hyper-V Server 2016 on your physical server and purchase one Windows Server 2016 Standard license (8x two-core license packs) for every two Windows Server 2016 virtual machines you want to run on that host.

Note 2:  Virtualization rights only apply to Windows Server VMs.  You can run an unlimited number of Linux VMs on any version of Windows, provided your hardware can handle the load.

What about Datacenter edition?

In terms of virtualization rights, Windows Server 2016 Datacenter doesn’t start to make sense until you need to run upwards of 13 virtual machines on a single host.  The exact cutoff is 14 virtual machines, but because each minimum license (8x two-core packs) gives you 2 VMs, 13 VMs cost the same as 14.  Purchasing the 7 Standard edition licenses needed to run 13 virtual machines on a single host costs about the same as a Datacenter edition license.
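The break-even reasoning above can be sketched in Python.  The prices here are hypothetical placeholders, chosen only to reflect this article’s claim that one Datacenter license costs roughly the same as seven Standard licenses; check current Microsoft pricing before making a real decision:

```python
import math

# Hypothetical relative prices (Datacenter ~= 7x Standard, per the
# break-even claim above), not actual Microsoft list prices.
STANDARD_PRICE = 1.0
DATACENTER_PRICE = 7.0

def cheapest_edition(vm_count: int) -> str:
    """Compare licensing one 16-core host with Standard (2 VMs per
    license, so ceil(n/2) licenses) against one Datacenter license
    (unlimited VMs on that host)."""
    standard_cost = math.ceil(vm_count / 2) * STANDARD_PRICE
    return "Standard" if standard_cost < DATACENTER_PRICE else "Datacenter"

print(cheapest_edition(12))  # 6 Standard licenses are still cheaper
print(cheapest_edition(13))  # 7 Standard licenses = Datacenter price
```

With these placeholder prices, the crossover lands at 13 VMs, matching the cutoff described above.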

Note 3:  Datacenter edition has features that Standard edition does not, such as Storage Spaces Direct and Storage Replica… among quite a few others.  So there are some legitimate reasons to run Windows Server 2016 Datacenter edition on a Hyper-V host.

Windows Server 2016 Failover Cluster Licensing

In general, each physical node in a cluster must be licensed for any VM that can run on it.

You can reduce the number of nodes that must be licensed by preventing VMs from running on specific nodes.  This is done via the “Possible Owners” setting in Failover Cluster Manager, as shown below:

Failover Cluster Manager – Possible Owners setting

Keep in mind that if a VM CAN run on a node, the node MUST be licensed appropriately!
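As an illustration of that rule, here is a minimal Python sketch that tallies how many Standard licenses each node would need given each VM’s Possible Owners.  The VM names, node names, and helper function are hypothetical, not part of any Microsoft tooling:

```python
import math

# Hypothetical cluster layout: the nodes each VM is allowed to fail
# over to (the "Possible Owners" setting in Failover Cluster Manager).
possible_owners = {
    "VM1": ["NODE1", "NODE2"],
    "VM2": ["NODE1"],
    "VM3": ["NODE1", "NODE2"],
}

def standard_licenses_per_node(owners):
    """Each node must be licensed for every VM that CAN run on it;
    one Standard license covers two VMs on a given host."""
    vm_counts = {}
    for vm, nodes in owners.items():
        for node in nodes:
            vm_counts[node] = vm_counts.get(node, 0) + 1
    return {node: math.ceil(count / 2) for node, count in vm_counts.items()}

print(standard_licenses_per_node(possible_owners))
# NODE1 can host 3 VMs -> 2 licenses; NODE2 can host 2 VMs -> 1 license
```

Restricting VM2 and VM3 to NODE1 only, for example, would drop NODE2 to a single license.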

Software Assurance (SA)

If you purchased SA with your server license, you gain some additional benefits, specifically “License Mobility” and “Fail-over Rights”.

License Mobility

License Mobility can be particularly useful in the clustering and virtualization world.  For example, suppose you have a two-node active-passive cluster: one physical server licensed with Datacenter (NODE1) and a second physical server running the free Hyper-V Server 2016 (NODE2).  For simplicity in this example, NODE2 has no running VMs.

With License Mobility, you essentially have the freedom to move that Datacenter license to any server you want, as often as you want, within the same server farm.  The caveat is that all the Windows Server VMs must follow the Datacenter license (minus whatever the other server is already licensed for).  This is useful when you need planned downtime on NODE1: temporarily transfer your Datacenter license to the other server and live-migrate all of your VMs to it, avoiding any downtime.  You are then free to update, upgrade, reboot, or do whatever you want to NODE1.

Fail-over Rights

This means that, in anticipation of a fail-over event, you may run a passive fail-over instance on another qualifying shared server (NODE2).  Keep in mind that the number of licenses that would otherwise be required to run the passive fail-over instances must not exceed the number of licenses required to run the corresponding production instances on the same partner’s shared servers.

References

Microsoft Volume Licensing (direct .doc link):  Microsoft Product Terms – February 1, 2017
Other Languages:  Licensing Terms and Documentation

All Microsoft Products:  Licensing Terms and Documentation

Microsoft Azure Cloud Administrator

Looking to master the core principles of operating a Microsoft Azure-based cloud infrastructure? This learning path is for any technology professional who wants to be involved in the operation and administration of Azure-based solutions and infrastructure. You will learn the fundamentals of implementing, monitoring, and maintaining Microsoft Azure solutions, including major services related to Compute, Storage, Network, and Security. By the end of this learning path, you will be able to implement, monitor, and manage the most commonly used Microsoft Azure services and components, as configured for the most common use cases.

To go deeper, follow the deep-dive series below.

Azure Cloud Administrator

Primary Skills

Application Management Series

Cloud Management Series

Device Management Series

Identity Management Series

Secondary Skills

Architecture Series

Infrastructure – Hybrid/Private Cloud Series

Infrastructure – Open Source Series

Infrastructure – Public Cloud Series

Security & Privacy Series

DevOps Series

If you have been following these series and completed them, then it’s time for the Microsoft Certification Path. Join the MVA courses at https://mva.microsoft.com and start your cloud career.

Ethical Hacking and Penetration Testing Resources

(Free) Virtual Private Networks (VPNs)

Custom Personal Targets

Archive/Repository

Books

Programming

Security Courses

Penetration Testing Methodologies, Tools and Techniques

Penetration Testing Resources

Exploit Development

OSINT Resources

Social Engineering Resources

Lock Picking Resources

Operating Systems

Tools

Penetration Testing Distributions

  • Kali – GNU/Linux distribution designed for digital forensics and penetration testing.
  • ArchStrike – Arch GNU/Linux repository for security professionals and enthusiasts.
  • BlackArch – Arch GNU/Linux-based distribution for penetration testers and security researchers.
  • Network Security Toolkit (NST) – Fedora-based bootable live operating system designed to provide easy access to best-of-breed open source network security applications.
  • Pentoo – Security-focused live CD based on Gentoo.
  • BackBox – Ubuntu-based distribution for penetration tests and security assessments.
  • Parrot – Distribution similar to Kali, supporting multiple architectures.
  • Buscador – GNU/Linux virtual machine that is pre-configured for online investigators.
  • Fedora Security Lab – Provides a safe test environment to work on security auditing, forensics, system rescue and teaching security testing methodologies.
  • The Pentesters Framework – Distro organized around the Penetration Testing Execution Standard (PTES), providing a curated collection of utilities that eliminates often unused toolchains.
  • AttifyOS – GNU/Linux distribution focused on tools useful during Internet of Things (IoT) security assessments.

Docker for Penetration Testing

Multi-paradigm Frameworks

  • Metasploit – Software for offensive security teams to help verify vulnerabilities and manage security assessments.
  • Armitage – Java-based GUI front-end for the Metasploit Framework.
  • Faraday – Multiuser integrated pentesting environment for red teams performing cooperative penetration tests, security audits, and risk assessments.
  • ExploitPack – Graphical tool for automating penetration tests that ships with many pre-packaged exploits.
  • Pupy – Cross-platform (Windows, Linux, macOS, Android) remote administration and post-exploitation tool.

Vulnerability Scanners

  • Nexpose – Commercial vulnerability and risk management assessment engine that integrates with Metasploit, sold by Rapid7.
  • Nessus – Commercial vulnerability management, configuration, and compliance assessment platform, sold by Tenable.
  • OpenVAS – Free software implementation of the popular Nessus vulnerability assessment system.
  • Vuls – Agentless vulnerability scanner for GNU/Linux and FreeBSD, written in Go.

Static Analyzers

  • Brakeman – Static analysis security vulnerability scanner for Ruby on Rails applications.
  • cppcheck – Extensible C/C++ static analyzer focused on finding bugs.
  • FindBugs – Free software static analyzer to look for bugs in Java code.
  • sobelow – Security-focused static analysis for the Phoenix Framework.

Web Scanners

  • Nikto – Noisy but fast black box web server and web application vulnerability scanner.
  • Arachni – Scriptable framework for evaluating the security of web applications.
  • w3af – Web application attack and audit framework.
  • Wapiti – Black box web application vulnerability scanner with built-in fuzzer.
  • SecApps – In-browser web application security testing suite.
  • WebReaver – Commercial, graphical web application vulnerability scanner designed for macOS.
  • WPScan – Black box WordPress vulnerability scanner.
  • cms-explorer – Reveal the specific modules, plugins, components and themes that various websites powered by content management systems are running.
  • joomscan – Joomla vulnerability scanner.

Network Tools

  • zmap – Open source network scanner that enables researchers to easily perform Internet-wide network studies.
  • nmap – Free security scanner for network exploration & security audits.
  • pig – GNU/Linux packet crafting tool.
  • scanless – Utility for using websites to perform port scans on your behalf so as not to reveal your own IP.
  • tcpdump/libpcap – Common packet analyzer that runs under the command line.
  • Wireshark – Widely-used graphical, cross-platform network protocol analyzer.
  • Network-Tools.com – Website offering an interface to numerous basic network utilities like ping, traceroute, whois, and more.
  • netsniff-ng – Swiss army knife for network sniffing.
  • Intercepter-NG – Multifunctional network toolkit.
  • SPARTA – Graphical interface offering scriptable, configurable access to existing network infrastructure scanning and enumeration tools.
  • dnschef – Highly configurable DNS proxy for pentesters.
  • DNSDumpster – Online DNS recon and search service.
  • CloudFail – Unmask server IP addresses hidden behind Cloudflare by searching old database records and detecting misconfigured DNS.
  • dnsenum – Perl script that enumerates DNS information from a domain, attempts zone transfers, performs a brute force dictionary style attack, and then performs reverse look-ups on the results.
  • dnsmap – Passive DNS network mapper.
  • dnsrecon – DNS enumeration script.
  • dnstracer – Determines where a given DNS server gets its information from, and follows the chain of DNS servers.
  • passivedns-client – Library and query tool for querying several passive DNS providers.
  • passivedns – Network sniffer that logs all DNS server replies for use in a passive DNS setup.
  • Mass Scan – TCP port scanner, spews SYN packets asynchronously, scanning entire Internet in under 5 minutes.
  • Zarp – Network attack tool centered around the exploitation of local networks.
  • mitmproxy – Interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
  • Morpheus – Automated ettercap TCP/IP Hijacking tool.
  • mallory – HTTP/HTTPS proxy over SSH.
  • SSH MITM – Intercept SSH connections with a proxy; all plaintext passwords and sessions are logged to disk.
  • Netzob – Reverse engineering, traffic generation and fuzzing of communication protocols.
  • DET – Proof of concept to perform data exfiltration using either single or multiple channel(s) at the same time.
  • pwnat – Punches holes in firewalls and NATs.
  • dsniff – Collection of tools for network auditing and pentesting.
  • tgcd – Simple Unix network utility to extend the accessibility of TCP/IP based network services beyond firewalls.
  • smbmap – Handy SMB enumeration tool.
  • scapy – Python-based interactive packet manipulation program & library.
  • Dshell – Network forensic analysis framework.
  • Debookee – Simple and powerful network traffic analyzer for macOS.
  • Dripcap – Caffeinated packet analyzer.
  • Printer Exploitation Toolkit (PRET) – Tool for printer security testing capable of IP and USB connectivity, fuzzing, and exploitation of PostScript, PJL, and PCL printer language features.
  • Praeda – Automated multi-function printer data harvester for gathering usable data during security assessments.
  • routersploit – Open source exploitation framework similar to Metasploit but dedicated to embedded devices.
  • evilgrade – Modular framework to take advantage of poor upgrade implementations by injecting fake updates.
  • XRay – Network (sub)domain discovery and reconnaissance automation tool.
  • Ettercap – Comprehensive, mature suite for machine-in-the-middle attacks.
  • BetterCAP – Modular, portable and easily extensible MITM framework.

Wireless Network Tools

  • Aircrack-ng – Set of tools for auditing wireless networks.
  • Kismet – Wireless network detector, sniffer, and IDS.
  • Reaver – Brute force attack against WiFi Protected Setup.
  • Wifite – Automated wireless attack tool.
  • Fluxion – Suite of automated social engineering based WPA attacks.

Transport Layer Security Tools

  • SSLyze – Fast and comprehensive TLS/SSL configuration analyzer to help identify security mis-configurations.
  • tls_prober – Fingerprint a server’s SSL/TLS implementation.

Web Exploitation

  • OWASP Zed Attack Proxy (ZAP) – Feature-rich, scriptable HTTP intercepting proxy and fuzzer for penetration testing web applications.
  • Fiddler – Free cross-platform web debugging proxy with user-friendly companion tools.
  • Burp Suite – Integrated platform for performing security testing of web applications.
  • autochrome – Easy-to-install test browser with all the appropriate settings needed for web application testing, with native Burp support, from NCC Group.
  • Browser Exploitation Framework (BeEF) – Command and control server for delivering exploits to commandeered Web browsers.
  • Offensive Web Testing Framework (OWTF) – Python-based framework for pentesting Web applications based on the OWASP Testing Guide.
  • WordPress Exploit Framework – Ruby framework for developing and using modules which aid in the penetration testing of WordPress powered websites and systems.
  • WPSploit – Exploit WordPress-powered websites with Metasploit.
  • SQLmap – Automatic SQL injection and database takeover tool.
  • tplmap – Automatic server-side template injection and Web server takeover tool.
  • weevely3 – Weaponized web shell.
  • Wappalyzer – Uncovers the technologies used on websites.
  • WhatWeb – Website fingerprinter.
  • BlindElephant – Web application fingerprinter.
  • wafw00f – Identifies and fingerprints Web Application Firewall (WAF) products.
  • fimap – Find, prepare, audit, exploit and even Google automatically for LFI/RFI bugs.
  • Kadabra – Automatic LFI exploiter and scanner.
  • Kadimus – LFI scan and exploit tool.
  • liffy – LFI exploitation tool.
  • Commix – Automated all-in-one operating system command injection and exploitation tool.
  • DVCS Ripper – Rip web accessible (distributed) version control systems: SVN/GIT/HG/BZR.
  • GitTools – Automatically find and download Web-accessible .git repositories.
  • sslstrip – Demonstration of the HTTPS stripping attacks.
  • sslstrip2 – SSLStrip version to defeat HSTS.

Hex Editors

  • HexEdit.js – Browser-based hex editing.
  • Hexinator – World’s finest (proprietary, commercial) Hex Editor.
  • Frhed – Binary file editor for Windows.
  • 0xED – Native macOS hex editor that supports plug-ins to display custom data types.

File Format Analysis Tools

  • Kaitai Struct – File formats and network protocols dissection language and web IDE, generating parsers in C++, C#, Java, JavaScript, Perl, PHP, Python, Ruby.
  • Veles – Binary data visualization and analysis tool.
  • Hachoir – Python library to view and edit a binary stream as tree of fields and tools for metadata extraction.

Defense Evasion Tools

  • Veil – Generate metasploit payloads that bypass common anti-virus solutions.
  • shellsploit – Generates custom shellcode, backdoors, injectors, optionally obfuscates every byte via encoders.
  • Hyperion – Runtime encryptor for 32-bit portable executables (“PE .exes”).
  • AntiVirus Evasion Tool (AVET) – Post-process exploits containing executable files targeted for Windows machines to avoid being recognized by antivirus software.
  • peCloak.py – Automates the process of hiding a malicious Windows executable from antivirus (AV) detection.
  • peCloakCapstone – Multi-platform fork of the peCloak.py automated malware antivirus evasion tool.
  • UniByAv – Simple obfuscator that takes raw shellcode and generates Anti-Virus friendly executables by using a brute-forcable, 32-bit XOR key.

Hash Cracking Tools

  • John the Ripper – Fast password cracker.
  • Hashcat – Even faster hash cracker.
  • CeWL – Generates custom wordlists by spidering a target’s website and collecting unique words.

Windows Utilities

  • Sysinternals Suite – The Sysinternals Troubleshooting Utilities.
  • Windows Credentials Editor – Inspect logon sessions and add, change, list, and delete associated credentials, including Kerberos tickets.
  • mimikatz – Credentials extraction tool for Windows operating system.
  • PowerSploit – PowerShell Post-Exploitation Framework.
  • Windows Exploit Suggester – Detects potential missing patches on the target.
  • Responder – LLMNR, NBT-NS and MDNS poisoner.
  • Bloodhound – Graphical Active Directory trust relationship explorer.
  • Empire – Pure PowerShell post-exploitation agent.
  • Fibratus – Tool for exploration and tracing of the Windows kernel.
  • wePWNise – Generates architecture independent VBA code to be used in Office documents or templates and automates bypassing application control and exploit mitigation software.
  • redsnarf – Post-exploitation tool for retrieving password hashes and credentials from Windows workstations, servers, and domain controllers.
  • Magic Unicorn – Shellcode generator for numerous attack vectors, including Microsoft Office macros, PowerShell, HTML applications (HTA), or certutil (using fake certificates).

GNU/Linux Utilities

macOS Utilities

  • Bella – Pure Python post-exploitation data mining and remote administration tool for macOS.

DDoS Tools

  • LOIC – Open source network stress tool for Windows.
  • JS LOIC – JavaScript in-browser version of LOIC.
  • SlowLoris – DoS tool that uses low bandwidth on the attacking side.
  • HOIC – Updated version of Low Orbit Ion Cannon, has ‘boosters’ to get around common counter measures.
  • T50 – Faster network stress tool.
  • UFONet – Abuses OSI layer 7 HTTP to create/manage ‘zombies’ and to conduct different attacks using GET/POST, multithreading, proxies, origin-spoofing methods, cache-evasion techniques, etc.

Social Engineering Tools

  • Social Engineer Toolkit (SET) – Open source pentesting framework designed for social engineering featuring a number of custom attack vectors to make believable attacks quickly.
  • King Phisher – Phishing campaign toolkit used for creating and managing multiple simultaneous phishing attacks with custom email and server content.
  • Evilginx – MITM attack framework used for phishing credentials and session cookies from any Web service.
  • wifiphisher – Automated phishing attacks against WiFi networks.
  • Catphish – Tool for phishing and corporate espionage written in Ruby.

OSINT Tools

  • Maltego – Proprietary software for open source intelligence and forensics, from Paterva.
  • theHarvester – E-mail, subdomain and people names harvester.
  • creepy – Geolocation OSINT tool.
  • metagoofil – Metadata harvester.
  • Google Hacking Database – Database of Google dorks; can be used for recon.
  • Google-dorks – Common Google dorks and others you probably don’t know.
  • GooDork – Command line Google dorking tool.
  • dork-cli – Command line Google dork tool.
  • Censys – Collects data on hosts and websites through daily ZMap and ZGrab scans.
  • Shodan – World’s first search engine for Internet-connected devices.
  • recon-ng – Full-featured Web Reconnaissance framework written in Python.
  • github-dorks – CLI tool to scan github repos/organizations for potential sensitive information leak.
  • vcsmap – Plugin-based tool to scan public version control systems for sensitive information.
  • Spiderfoot – Multi-source OSINT automation tool with a Web UI and report visualizations.
  • BinGoo – GNU/Linux bash based Bing and Google Dorking Tool.
  • fast-recon – Perform Google dorks against a domain.
  • snitch – Information gathering via dorks.
  • Sn1per – Automated Pentest Recon Scanner.
  • Threat Crowd – Search engine for threats.
  • Virus Total – VirusTotal is a free service that analyzes suspicious files and URLs and facilitates the quick detection of viruses, worms, trojans, and all kinds of malware.
  • DataSploit – OSINT visualizer utilizing Shodan, Censys, Clearbit, EmailHunter, FullContact, and Zoomeye behind the scenes.
  • AQUATONE – Subdomain discovery tool utilizing various open sources producing a report that can be used as input to other tools.

Anonymity Tools

  • Tor – Free software and onion routed overlay network that helps you defend against traffic analysis.
  • OnionScan – Tool for investigating the Dark Web by finding operational security issues introduced by Tor hidden service operators.
  • I2P – The Invisible Internet Project.
  • Nipe – Script to redirect all traffic from the machine to the Tor network.
  • What Every Browser Knows About You – Comprehensive detection page to test your own Web browser’s configuration for privacy and identity leaks.

Reverse Engineering Tools

  • Interactive Disassembler (IDA Pro) – Proprietary multi-processor disassembler and debugger for Windows, GNU/Linux, or macOS; also has a free version, IDA Free.
  • WDK/WinDbg – Windows Driver Kit and WinDbg.
  • OllyDbg – x86 debugger for Windows binaries that emphasizes binary code analysis.
  • Radare2 – Open source, cross-platform reverse engineering framework.
  • x64dbg – Open source x64/x32 debugger for Windows.
  • Immunity Debugger – Powerful way to write exploits and analyze malware.
  • Evan’s Debugger – OllyDbg-like debugger for GNU/Linux.
  • Medusa – Open source, cross-platform interactive disassembler.
  • plasma – Interactive disassembler for x86/ARM/MIPS. Generates indented pseudo-code with colored syntax code.
  • peda – Python Exploit Development Assistance for GDB.
  • dnSpy – Tool to reverse engineer .NET assemblies.
  • binwalk – Fast, easy to use tool for analyzing, reverse engineering, and extracting firmware images.
  • PyREBox – Python scriptable Reverse Engineering sandbox by Cisco-Talos.
  • Voltron – Extensible debugger UI toolkit written in Python.
  • Capstone – Lightweight multi-platform, multi-architecture disassembly framework.

Physical Access Tools

  • LAN Turtle – Covert “USB Ethernet Adapter” that provides remote access, network intelligence gathering, and MITM capabilities when installed in a local network.
  • USB Rubber Ducky – Customizable keystroke injection attack platform masquerading as a USB thumbdrive.
  • Poisontap – Siphons cookies, exposes internal (LAN-side) router and installs web backdoor on locked computers.
  • WiFi Pineapple – Wireless auditing and penetration testing platform.
  • Proxmark3 – RFID/NFC cloning, replay, and spoofing toolkit often used for analyzing and attacking proximity cards/readers, wireless keys/keyfobs, and more.

Side-channel Tools

  • ChipWhisperer – Complete open-source toolchain for side-channel power analysis and glitching attacks.

CTF Tools

  • ctf-tools – Collection of setup scripts to install various security research tools easily and quickly deployable to new machines.
  • Pwntools – Rapid exploit development framework built for use in CTFs.
  • RsaCtfTool – Decrypt data enciphered using weak RSA keys, and recover private keys from public keys using a variety of automated attacks.

Penetration Testing Report Templates

Books

Penetration Testing Books

Hackers Handbook Series

Defensive Development

Network Analysis Books

Reverse Engineering Books

Malware Analysis Books

Windows Books

Social Engineering Books

Lock Picking Books

Defcon Suggested Reading

Vulnerability Databases

  • Common Vulnerabilities and Exposures (CVE) – Dictionary of common names (i.e., CVE Identifiers) for publicly known security vulnerabilities.
  • National Vulnerability Database (NVD) – United States government’s National Vulnerability Database provides additional meta-data (CPE, CVSS scoring) of the standard CVE List along with a fine-grained search engine.
  • US-CERT Vulnerability Notes Database – Summaries, technical details, remediation information, and lists of vendors affected by software vulnerabilities, aggregated by the United States Computer Emergency Response Team (US-CERT).
  • Full-Disclosure – Public, vendor-neutral forum for detailed discussion of vulnerabilities, often publishes details before many other sources.
  • Bugtraq (BID) – Software security bug identification database compiled from submissions to the SecurityFocus mailing list and other sources, operated by Symantec, Inc.
  • Exploit-DB – Non-profit project hosting exploits for software vulnerabilities, provided as a public service by Offensive Security.
  • Microsoft Security Bulletins – Announcements of security issues discovered in Microsoft software, published by the Microsoft Security Response Center (MSRC).
  • Microsoft Security Advisories – Archive of security advisories impacting Microsoft software.
  • Mozilla Foundation Security Advisories – Archive of security advisories impacting Mozilla software, including the Firefox Web Browser.
  • Packet Storm – Compendium of exploits, advisories, tools, and other security-related resources aggregated from across the industry.
  • CXSecurity – Archive of published CVE and Bugtraq software vulnerabilities cross-referenced with a Google dork database for discovering the listed vulnerability.
  • SecuriTeam – Independent source of software vulnerability information.
  • Vulnerability Lab – Open forum for security advisories organized by category of exploit target.
  • Zero Day Initiative – Bug bounty program with publicly accessible archive of published security advisories, operated by TippingPoint.
  • Vulners – Security database of software vulnerabilities.
  • Inj3ct0r (Onion service) – Exploit marketplace and vulnerability information aggregator.
  • Open Source Vulnerability Database (OSVDB) – Historical archive of security vulnerabilities in computerized equipment, no longer adding to its vulnerability database as of April, 2016.
  • HPI-VDB – Aggregator of cross-referenced software vulnerabilities offering free-of-charge API access, provided by the Hasso-Plattner Institute, Potsdam.

Security Courses

Information Security Conferences

  • DEF CON – Annual hacker convention in Las Vegas.
  • Black Hat – Annual security conference in Las Vegas.
  • BSides – Framework for organising and holding security conferences.
  • CCC – Annual meeting of the international hacker scene in Germany.
  • DerbyCon – Annual hacker conference based in Louisville.
  • PhreakNIC – Technology conference held annually in middle Tennessee.
  • ShmooCon – Annual US East coast hacker convention.
  • CarolinaCon – Infosec conference, held annually in North Carolina.
  • CHCon – Christchurch Hacker Con, the only hacker con on New Zealand’s South Island.
  • SummerCon – One of the oldest hacker conventions, held during Summer.
  • Hack.lu – Annual conference held in Luxembourg.
  • Hackfest – Largest hacking conference in Canada.
  • HITB – Deep-knowledge security conference held in Malaysia and The Netherlands.
  • Troopers – Annual international IT Security event with workshops held in Heidelberg, Germany.
  • Hack3rCon – Annual US hacker conference.
  • ThotCon – Annual US hacker conference held in Chicago.
  • LayerOne – Annual US security conference held every spring in Los Angeles.
  • DeepSec – Security Conference in Vienna, Austria.
  • SkyDogCon – Technology conference in Nashville.
  • SECUINSIDE – Security Conference in Seoul.
  • DefCamp – Largest Security Conference in Eastern Europe, held annually in Bucharest, Romania.
  • AppSecUSA – Annual conference organized by OWASP.
  • BruCON – Annual security conference in Belgium.
  • Infosecurity Europe – Europe’s number one information security event, held in London, UK.
  • Nullcon – Annual conference in Delhi and Goa, India.
  • RSA Conference USA – Annual security conference in San Francisco, California, USA.
  • Swiss Cyber Storm – Annual security conference in Lucerne, Switzerland.
  • Virus Bulletin Conference – Annual conference; the 2016 edition was held in Denver, USA.
  • Ekoparty – Largest Security Conference in Latin America, held annually in Buenos Aires, Argentina.
  • 44Con – Annual Security Conference held in London.
  • BalCCon – Balkan Computer Congress, annually held in Novi Sad, Serbia.
  • FSec – Croatian Information Security Gathering in Varaždin, Croatia.

Information Security Magazines

Awesome Lists

Credit and Original Location: https://github.com/enaqx/awesome-pentest

This article has been provided for educational purposes only.

Cloud Adoption and SharePoint Server Test Lab Guide for IT Professionals

Use these cloud adoption Test Lab Guides (TLGs) to set up demonstration or dev/test environments for Office 365, Enterprise Mobility + Security (EMS), Dynamics 365, and Office Server products.

TLGs help you quickly learn about Microsoft products. They’re great for situations where you need to evaluate a technology or configuration before you decide whether it’s right for you or before you roll it out to users. The “I built it out myself and it works” hands-on experience helps you understand the deployment requirements of a new product or solution so you can better plan for hosting it in production.

TLGs also create representative environments for development and testing of applications, also known as dev/test environments.

Test Lab Guides in the Microsoft Cloud

See these additional resources before diving in:

Use these articles to build your Office 365 dev/test environment:

  • Base Configuration dev/test environment

    Create a simplified intranet running in Microsoft Azure infrastructure services. This is an optional step if you want to build a simulated enterprise configuration.

  • Office 365 dev/test environment

    Create an Office 365 Enterprise E5 trial subscription, which you can do from your computer or from a simplified intranet running in Azure infrastructure services.

  • DirSync for your Office 365 dev/test environment

    Install and configure Azure AD Connect for directory synchronization with password synchronization. This is an optional step if you want to build a simulated enterprise configuration.

For your Office 365 dev/test environment, use these articles to demonstrate enterprise features of Office 365:

Create a dev/test environment for Microsoft 365 Enterprise scenarios with these articles:

Add a Dynamics 365 trial subscription and test Office 365 and Dynamics 365 integrated features and scenarios with these articles:

Create a dev/test environment that includes all of Microsoft’s cloud offerings: Office 365, Azure, EMS, and Dynamics 365. See The One Microsoft Cloud dev/test environment for the step-by-step instructions.

You can create a cross-premises dev/test environment, which includes an Azure virtual network and a simulated on-premises network, with these articles:

Here are additional cloud-based dev/test environments that you can create in Azure infrastructure services:

Join the discussion

Contact us

  • What cloud adoption content do you need? We are creating content for cloud adoption that spans multiple Microsoft cloud platforms and services. Let us know what you think about our cloud adoption content, or ask for specific content by sending email to cloudadopt@microsoft.com.
  • Join the cloud adoption discussion. If you are passionate about cloud-based solutions, consider joining the Cloud Adoption Advisory Board (CAAB) to connect with a larger, vibrant community of Microsoft content developers, industry professionals, and customers from around the globe. To join, add yourself as a member of the CAAB (Cloud Adoption Advisory Board) space of the Microsoft Tech Community and send us a quick email at CAAB@microsoft.com. Anyone can read community-related content on the CAAB blog. However, CAAB members get invitations to private webinars that describe new cloud adoption resources and solutions.
  • Get the art you see here. If you want an editable copy of the art you see in this article, we'll be glad to send it to you. Email your request, including the URL and title of the art, to cloudadopt@microsoft.com.

Free Step by Step SharePoint Server 2013 Lab Guides by Microsoft

This post contains a collection of free step-by-step SharePoint Server 2013 lab guides that Microsoft offers on its Download Center. Usually I post them together with the other free resources that Microsoft offers; however, this is a Test Lab Guide (TLG) only post, and the rest of the resources will come later in the month.


Microsoft Download Center

  1. Test Lab Guide: Configure SharePoint Server 2013 in a three-tier farm
  2. Test Lab Guide: Configure intranet and team sites for SharePoint Server 2013
  3. Test Lab Guide: Demonstrate permissions with SharePoint Server 2013
  4. Test Lab Guide: Demonstrate profile synchronization for SharePoint Server 2013
  5. Test Lab Guide: Demonstrate Social Features for SharePoint Server 2013
  6. Test Lab Guide: Demonstrate SAML-based Claims Authentication with SharePoint Server 2013
  7. Test Lab Guide: Demonstrate forms-based claims authentication for SharePoint Server 2013
  8. Test Lab Guide: Configure eDiscovery for SharePoint Server 2013
  9. Test Lab Guide: Create a Business Intelligence Baseline Environment
  10. Test Lab Guide: Configure Secure Store
  11. Test Lab Guide: Configure Excel Services
  12. Test Lab Guide: Configure the Excel Services unattended service account
  13. Test Lab Guide: Configure Excel Services data refresh by using an embedded connection
  14. Test Lab Guide: Configure Excel Services data refresh by using an external connection
  15. Test Lab Guide: Configure Visio Services
  16. Test Lab Guide: Configure the Visio Services unattended service account
  17. Test Lab Guide: Configure Visio Services data refresh using an external connection
  18. Test Lab Guide: Configure PerformancePoint Services
  19. Test Lab Guide: Configure data access for PerformancePoint Services
  20. Test Lab Guide Mini-Module: Configuring a Second SharePoint Server 2013 Farm 
  21. Test Lab Guide: Configure a Highly Available SharePoint Server 2013 Search Topology
  22. Test Lab Guide: Configure an Integrated Exchange, Lync, and SharePoint Test Lab

A very nice poster from Microsoft that summarizes the above: http://www.microsoft.com/en-ca/download/details.aspx?id=39298

TechEd North America

  1. Configuring Office Web Applications for Microsoft SharePoint 2013 
  2. Configuring Social Features in Microsoft SharePoint 2013 
  3. Extending the Search Experience in Microsoft SharePoint 2013 
  4. Introduction to Web Content Management in Microsoft SharePoint 2013 
  5. Designing a Microsoft SharePoint 2013 Site 

Migrate Google G-Suite mailboxes to Office 365

Migrate your IMAP mailboxes to Office 365 gives you an overview of the migration process. Read it first, and when you're familiar with the contents of that article, return to this topic to learn how to migrate mailboxes from G Suite (formerly known as Google Apps) Gmail to Office 365. You must be a global admin in Office 365 to complete IMAP migration steps.

Looking for Windows PowerShell commands? See Use PowerShell to perform an IMAP migration to Office 365.

Want to migrate other types of IMAP mailboxes? See Migrate other types of IMAP mailboxes to Office 365.

Migration from G Suite mailboxes using the Office 365 admin center

You can use the setup wizard in the Office 365 admin center for an IMAP migration. See IMAP migration in the Office 365 admin center for instructions.

IMPORTANT: IMAP migration will only migrate emails, not calendar and contact information. Users can import their own email, contacts, and other mailbox information to Office 365. See Migrate email and contacts to Office 365 for Business to learn how.

Before Office 365 can connect to Gmail or G Suite, all the account owners need to create an app password for their account. This is because Google considers Outlook a less secure app and will not allow a connection with a password alone. For instructions, see Prepare your G Suite account for connecting to Outlook and Office 365. You'll also need to make sure your G Suite users can turn on 2-step verification.

Gmail Migration tasks

The following list contains the migration tasks given in the order in which you should complete them.

Step 1: Verify you own your domain

In this task, you’ll first verify to Office 365 that you own the domain you used for your G Suite accounts.

Notes:

  • Another option is to use the your-company-name.onmicrosoft.com domain that is included with your Office 365 subscription instead of your own custom domain. In that case, you can just add users as described in Create users in Office 365 and omit this task.
  • Most people, however, prefer to use their own domain.

Domain verification is a task you go through as you set up Office 365. During setup, the Office 365 setup wizard provides you with a TXT record to add at your domain host provider. See Verify your domain in Office 365 for the steps to complete in the Office 365 admin center, and choose one of the two following options to see how to add the TXT record at your DNS host provider.

  • Your current DNS host provider is Google.    If you purchased your domain from Google and they are the DNS hosting provider, follow these instructions: Create DNS records when your domain is managed by Google.
  • You purchased your domain from another domain registrar.    If you purchased your domain from a different company, we provide instructions for many popular domain hosting providers.

Step 2: Add users to Office 365

You can add your users either one at a time or several at a time. When you add users, you also assign licenses to them. Each user must have a mailbox on Office 365 before you can migrate email to it, and each user needs a license that includes an Exchange Online plan to use that mailbox.

Important: At this point you have verified that you own the domain and created your G Suite users and mailboxes in Office 365 with your custom domain. Close the wizard at this step. Do not proceed to Set up domain until your Gmail mailboxes are migrated to Office 365. You'll finish the setup steps in Step 6, where you update your DNS records to route Gmail directly to Office 365.

Step 3: Create a list of Gmail mailboxes to migrate

For this task, you create a migration file that contains a list of Gmail mailboxes to migrate to Office 365. The easiest way to create the migration file is by using Excel, so we use Excel in these instructions. You can use Excel 2013, Excel 2010, or Excel 2007.

When you create the migration file, you need to know the password of each Gmail mailbox that you want to migrate. We’re assuming you don’t know the user passwords, so you’ll probably need to assign temporary passwords (by resetting the passwords) to all mailboxes during the migration. You must be an administrator in G Suite to reset passwords.

You don’t have to migrate all Gmail mailboxes at once. You can do them in batches at your convenience. You can include up to 50,000 mailboxes (one row for each user) in your migration file. The file can be as large as 10 MB.
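Since a single migration file tops out at 50,000 rows, very large migrations have to be split across files. A minimal Python sketch of that chunking (the helper name is illustrative, not part of any Office 365 API):

```python
# Sketch: keep each migration file under the 50,000-row limit.
# split_into_batches is an illustrative helper, not an Office 365 API.

def split_into_batches(rows, max_rows=50000):
    """Return a list of batches, each holding at most max_rows rows."""
    return [rows[i:i + max_rows] for i in range(0, len(rows), max_rows)]

# 120,000 mailboxes would need three migration files: 50,000 + 50,000 + 20,000.
batches = split_into_batches(list(range(120000)))
print([len(b) for b in batches])  # prints [50000, 50000, 20000]
```

Each batch would then be written out as its own CSV migration file and submitted as a separate migration batch.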

  1. Sign in to G Suite admin console using your administrator username and password.
  2. After you’re signed in, choose Users.

    List of users in the Google admin center.

  3. Select each user to identify each user’s email address. Write down the address.

    User details in the Google apps admin center

  4. Sign in to the Office 365 admin center, and go to Users > Active users. Keep an eye on the User name column. You’ll use this information in a minute. Keep the Office 365 admin center window open, too.

    User Name column in the Office 365 admin center.

  5. Start Excel.
  6. Use the following screenshot as a template to create the migration file in Excel. Start with the headings in row 1. Make sure they match the picture exactly and don’t contain spaces. The exact heading names are:
    • EmailAddress in cell A1.
    • UserName in cell B1.
    • Password in cell C1.

      Cell headings in the Excel migration file.

  7. Next enter the email address, user name, and password for each mailbox you want to migrate. Enter one mailbox per row.
    • Column A is the email address of the Office 365 mailbox. This is what’s shown in the User name column in Users > Active users in the Office 365 admin center.
    • Column B is the sign-in name for the user’s Gmail mailbox—for example, alberta@contoso.com.
    • Column C is the app password for the user’s Gmail mailbox. Creating the app password is described in Migration from G Suite mailboxes using the Office 365 admin center.

      A completed sample migration file.

  8. Save the file as a CSV file type, and then close Excel.

    Shows the Save As CSV option in Excel.
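If you'd rather script the migration file than build it in Excel, a minimal Python sketch using the standard csv module produces the same three-column layout. The addresses and app passwords below are placeholder examples, not real accounts:

```python
# Sketch: generate the migration file programmatically.
# The mailbox rows below are placeholders; substitute your own values.
import csv

HEADERS = ["EmailAddress", "UserName", "Password"]  # must match exactly, no spaces

rows = [
    # Office 365 address,   Gmail sign-in name,    Gmail app password
    ("alberta@contoso.com", "alberta@contoso.com", "app-password-1"),
    ("wilson@contoso.com",  "wilson@contoso.com",  "app-password-2"),
]

with open("gmail-migration.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(HEADERS)
    writer.writerows(rows)
```

The result is a CSV file you can submit in Step 5 exactly as you would the Excel-saved version.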

Step 4: Connect Office 365 to Gmail

To migrate Gmail mailboxes successfully, Office 365 needs to connect and communicate with Gmail. To do this, Office 365 uses a migration endpoint. Migration endpoint is a technical term that describes the settings that are used to create the connection so you can migrate the mailboxes. You create the migration endpoint in this task.
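Before creating the endpoint in the admin center, you can sanity-check that Gmail's IMAP interface is reachable from your network. A hedged sketch using Python's standard imaplib module (the GMAIL_IMAP dict and helper name are illustrative; imap.gmail.com on SSL port 993 is Gmail's published IMAP endpoint):

```python
# Sketch: confirm the Gmail IMAP endpoint answers before configuring the
# migration endpoint. The check needs outbound network access, hence the guard.
import imaplib

GMAIL_IMAP = {"server": "imap.gmail.com", "port": 993}  # IMAP over SSL

def imap_reachable(server, port):
    """Return True if an IMAP-over-SSL connection can be opened and closed."""
    try:
        conn = imaplib.IMAP4_SSL(server, port)
        conn.logout()
        return True
    except (OSError, imaplib.IMAP4.error):
        return False

if __name__ == "__main__":
    print(imap_reachable(GMAIL_IMAP["server"], GMAIL_IMAP["port"]))
```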

  1. Go to the Exchange admin center.
  2. In the EAC, go to Recipients > Migration > More > Migration endpoints.

    Select Migration endpoint.

  3. Click New to create a new migration endpoint.
  4. On the Select the migration endpoint type page, choose IMAP.
  5. On the IMAP migration configuration page, set IMAP server to imap.gmail.com and keep the default settings the same.
  6. Click Next. The migration service uses the settings to test the connection to the Gmail system. If the connection works, the Enter general information page opens.
  7. On the Enter general information page, type a Migration endpoint name, for example, Test5-endpoint. Leave the other two boxes blank to use the default values.

    Migration endpoint name.

  8. Click New to create the migration endpoint.

Step 5: Create a migration batch and start migrating Gmail mailboxes

You use a migration batch to migrate groups of Gmail mailboxes to Office 365 at the same time. The batch consists of the Gmail mailboxes that you listed in the migration file in the previous task.

Tips:

  • It’s a good idea to first create a test migration batch with a small number of mailboxes to test the process.
  • Use migration files with the same number of rows, and run the batches at similar times during the day. Then compare the total running time for each test batch. This helps you estimate how long it could take to migrate all your mailboxes, how large each migration batch should be, and how many simultaneous connections to the source email system you should use to balance migration speed and Internet bandwidth.
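The estimation described in the tip above boils down to proportional scaling. A small Python sketch (the helper name is illustrative, and it assumes migration time grows roughly linearly with mailbox count, which real batches only approximate):

```python
# Sketch: project total migration time from a timed test batch.
# Assumes roughly linear scaling; real throughput also depends on bandwidth
# and the number of simultaneous connections.

def estimate_total_hours(test_batch_size, test_hours, total_mailboxes):
    """Scale a test batch's running time up to the full mailbox count."""
    return test_hours * (total_mailboxes / test_batch_size)

# Example: a 25-mailbox test batch that took 2 hours suggests roughly
# 40 hours for 500 mailboxes.
print(estimate_total_hours(25, 2, 500))  # prints 40.0
```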
  1. In the Office 365 admin center, navigate to Admin centers > Exchange.

    Go to Exchange admin center.

  2. In the Exchange admin center, go to Recipients > Migration.
  3. Click New > Migrate to Exchange Online.

    Select Migrate to Exchange Online

  4. Choose IMAP migration > Next.
  5. On the Select the users page, click Browse to specify the migration file you created. After you select your migration file, Office 365 checks it to make sure:
    • It isn’t empty.
    • It uses comma-separated formatting.
    • It doesn’t contain more than 50,000 rows.
    • It includes the required attributes in the header row.
    • It contains rows with the same number of columns as the header row.

    If any one of these checks fails, you’ll get an error that describes the reason for the failure. If you get an error, you must fix the migration file and resubmit it to create a migration batch.

  6. After Office 365 validates the migration file, it displays the number of users listed in the file as the number of Gmail mailboxes to migrate.

    New migration batch with CSV file

  7. Click Next.
  8. On the Set the migration endpoint page, select the migration endpoint that you created in the previous step, and click Next.
  9. On the IMAP migration configuration page, accept the default values, and click Next.
  10. On the Move configuration page, type the name (no spaces or special characters) of the migration batch in the box—for example, Test5-migration. The default migration batch name that’s displayed is the name of the migration file that you specified. The migration batch name is displayed in the list on the migration dashboard after you create the migration batch.

    You can also enter the names of the folders you want to exclude from migration, for example Shared, Junk Email, and Deleted. Click Add to add them to the excluded list. You can also use the edit icon to change a folder name and the remove icon to delete a folder name.

    Move configuration dialog

  11. Click Next
  12. On the Start the batch page, do the following:
    • Choose Browse to send a copy of the migration reports to other users. By default, migration reports are emailed to you. You can also access the migration reports from the properties page of the migration batch.
    • Choose Automatically start the batch > New. The migration starts immediately with the status Syncing.

      Migration batch is syncing

Note: If the status shows Syncing for a long time, you may be experiencing bandwidth limits set by Google. For more information, see Bandwidth limits.
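The file checks that Office 365 performs in step 5 can be approximated locally before you upload, so you catch problems without a round trip to the service. A hedged Python sketch (an approximation of the documented checks, not the exact service-side logic):

```python
# Sketch: pre-validate the migration CSV against the documented limits.
# This mirrors, but does not exactly reproduce, Office 365's own checks.
import csv

REQUIRED_HEADERS = ["EmailAddress", "UserName", "Password"]
MAX_ROWS = 50000

def validate_migration_file(path):
    """Return a list of problems found; an empty list means the file looks OK."""
    problems = []
    with open(path, newline="") as f:
        records = list(csv.reader(f))
    if not records:
        return ["file is empty"]
    header, rows = records[0], records[1:]
    if any(h not in header for h in REQUIRED_HEADERS):
        problems.append("header row is missing a required attribute")
    if len(rows) > MAX_ROWS:
        problems.append("file contains more than 50,000 rows")
    if any(len(r) != len(header) for r in rows):
        problems.append("a row has a different number of columns than the header")
    return problems
```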

Verify that the migration worked

  • In the Exchange admin center, go to Recipients > Migration. Verify that the batch is displayed in the migration dashboard. If the migration completed successfully, the status is Synced.
  • If this task fails, check the associated Mailbox status reports for specific errors, and double-check that your migration file has the correct Office 365 email address in the EmailAddress column.

Verify a successful mailbox migration to Office 365

  • Ask your migrated users to complete the following tasks:
    • Go to the Office 365 sign-in page, and sign in with your user name and temporary password.
    • Update your password, and set your time zone. It’s important that you select the correct time zone to make sure your calendar and email settings are correct.
    • When Outlook Web App opens, send an email message to another Office 365 user to verify that you can send email.
    • Choose Outlook, and check that your email messages and folders are all there.

Optional: Reduce email delays

Although this task is optional, doing it can help avoid delays in receiving email in the new Office 365 mailboxes.

When people outside of your organization send you email, their email systems don’t double-check where to send that email every time. Instead, their systems save the location of your email system based on a setting in your DNS server known as a time-to-live (TTL). If you change the location of your email system before the TTL expires, the sender’s email system tries to send email to the old location before figuring out that the location changed. This can result in a mail delivery delay. One way to avoid this is to lower the TTL that your DNS server gives to servers outside of your organization. This will make the other organizations refresh the location of your email system more often.

Most email systems ask for an update each hour if a short interval such as 3,600 seconds (one hour) is set. We recommend that you set the interval at least this low before you start the email migration. This setting allows all the systems that send you email enough time to process the change. Then, when you make the final switch over to Office 365, you can change the TTL back to a longer interval.

The place to change the TTL setting is on your email system’s mail exchanger record, also called an MX record. This lives in your public facing DNS. If you have more than one MX record, you need to change the value on each record to 3,600 seconds or less.

Don’t worry if you skip this task. It might take longer for email to start showing up in your new Office 365 mailboxes, but it will get there.

If you need some help configuring your DNS settings, see Create DNS records for Office 365 when you manage your DNS records.

Step 6: Update your DNS records to route Gmail directly to Office 365

Email systems use a DNS record called an MX record to figure out where to deliver email. During the email migration process, your MX record was pointing to your Gmail system. Now that you’ve completed your email migration to Office 365, it’s time to point your MX record to Office 365. After you change your MX record following these steps, email sent to users at your custom domain is delivered to Office 365 mailboxes.

For many DNS providers, there are specific instructions for changing your MX record; see Create DNS records for Office 365 when you manage your DNS records for instructions. If your DNS provider isn’t included, or if you want to get a sense of the general directions, general MX record instructions are provided as well; see Create DNS records at any DNS hosting provider for Office 365 for instructions.
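Once the MX change propagates, you can spot-check what a lookup returns. Office 365 MX targets follow the pattern domain-key.mail.protection.outlook.com, so a tiny helper (the function name is illustrative) can classify the host you see in nslookup or dig output:

```python
# Sketch: check whether an MX target already points at Office 365.
# Office 365 MX hosts end in .mail.protection.outlook.com.

def points_at_office365(mx_host):
    """True if the MX target looks like an Office 365 mail endpoint."""
    return mx_host.rstrip(".").lower().endswith(".mail.protection.outlook.com")

print(points_at_office365("contoso-com.mail.protection.outlook.com."))  # prints True
print(points_at_office365("aspmx.l.google.com."))                       # prints False
```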

  1. Sign in to Office 365 with your work or school account.
  2. Go to the Domains page.
  3. Select your domain and then choose Fix issues.

    The status shows Fix issues because you stopped the wizard partway through so you could migrate your Gmail email to Office 365 before switching your MX record.

    Domain that needs to be fixed.

  4. For each DNS record type that you need to add, choose What do I fix?, and follow the instructions to add the records for Office 365 services.
  5. After you’ve added all the records, you’ll see a message that your domain is set up correctly: Contoso.com is set up correctly. No action is required.

It can take up to 72 hours for the email systems of your customers and partners to recognize the changed MX record. Wait at least 72 hours before you proceed to stopping synchronization with Gmail.

Step 7: Stop synchronization with Gmail

During the last task, you updated the MX record for your domain. Now it’s time to verify that all email is being routed to Office 365. After verification, you can delete the migration batch and stop the synchronization between Gmail and Office 365. Before you take this step:

  • Make sure that your users are using Office 365 exclusively for email. After you delete the migration batch, email that is sent to Gmail mailboxes isn’t copied to Office 365. This means your users can’t get that email, so make sure that all users are on the new system.
  • Let the migration batch run for at least 72 hours before you delete it. This makes the following two things more likely:
    • Your Gmail mailboxes and Office 365 mailboxes have synchronized at least once (they synchronize once a day).
    • The email systems of your customers and partners have recognized the changes to your MX records and are now properly sending email to your Office 365 mailboxes.

When you delete the migration batch, the migration service cleans up any records related to the migration batch and removes it from the migration dashboard.

Delete a migration batch

  1. In the Exchange admin center, go to Recipients > Migration.
  2. On the migration dashboard, select the batch, and then click Delete.

How do you know this worked?

  • In the Exchange admin center, navigate to Recipients > Migration. Verify that the migration batch is no longer listed on the migration dashboard.

Step 8: Users migrate their calendar and contacts

After you migrate their email, users can import their Gmail calendar and contacts to Outlook:

SharePoint 3-Tier On-Premises Installation

This post is a complete guide to setting up SharePoint 2016 on-premises. Most of the documents available online do not provide complete steps for installing SharePoint 2016 with all of the prerequisites installed manually. Troubleshooting steps are also provided for some of the most common mistakes made during installation.

The Three-Tier Architecture

https://i-technet.sec.s-msft.com/dynimg/IC378512.gif

This post is based on a virtual machine environment and will guide you through setting up SharePoint Server 2016 in your VM environment. If you are setting up on physical machines instead, be sure to check the hardware and software requirements, supported systems, storage, and networking.

This document is a simplified guide written to cover all aspects and scenarios encountered while setting up the prerequisites for SharePoint 2016, system requirements, errors and problems faced during set up.

In this guide we will use a Windows Server 2012 R2 server as an example and explain the procedure in detail.

Prerequisites:

1. A Windows Server 2012 R2 ISO file (Windows_server_2012_r2_with_update_x64_dvd.iso)

Use the configurations below according to your requirements and installation scenario:

Installation scenario, deployment type and scale, and minimum hardware:

  • Single server role that uses SQL Server, for a development or evaluation installation of SharePoint Server 2016 with the minimum recommended services for development environments (use the Single-Server farm role, which lets you choose which service applications to provision; for additional information, see Overview of MinRole Server Roles in SharePoint Server 2016): 16 GB RAM, 64-bit 4-core processor, 80 GB for the system drive and 100 GB for a second drive.
  • Single server role that uses SQL Server, for a pilot or user acceptance test installation of SharePoint Server 2016 running all available services for development environments: 24 GB RAM, 64-bit 4-core processor, 80 GB for the system drive and 100 GB for a second drive plus additional drives.
  • Web server or application server in a three-tier farm, for a development or evaluation installation of SharePoint Server 2016 with a minimum number of services: 12 GB RAM, 64-bit 4-core processor, 80 GB for the system drive and 80 GB for a second drive.
  • Web server or application server in a three-tier farm, for a pilot, user acceptance test, or production deployment of SharePoint Server 2016 running all available services: 16 GB RAM, 64-bit 4-core processor, 80 GB for the system drive and 80 GB for a second drive plus additional drives.
https://technet.microsoft.com/en-us/library/cc262485(v=office.16).aspx#Anchor_1

2. An Active Directory server in the same domain where you will be installing the SharePoint 2016 server

3. SQL Server 2014 (SQLServer2014SP2-Full-x64-ENU.iso)

4. SharePoint server 2016 with license (SharePoint_server_2016_x64_dvd_8419458.iso)

Note: In this lab I used a SharePoint 2016 180-day trial license, and for the Windows servers I used my MSDN licenses. Your environment may use different license types, so take a brief look at their limitations before using them.

Install the Windows Server 2012 R2 update: April 2016.

All the steps are explained in detail with pictures below:

Step 1:

As we are using Windows Server 2012 R2 in our example, let’s update the server with all the latest available updates. The updates can vary depending on the Win2k12 R2 ISO file you have.

It’s recommended to install all the latest available updates from Microsoft.

updates_windows2k12

Once the system is updated, restart the server.

Step 2:

Install the Active Directory server on the Win2k12 R2 server. The step-by-step procedure to set it up is given here: https://support.rackspace.com/how-to/installing-active-directory-on-windows-server-2012/.

The AD server is a prerequisite that needs to be installed in the same domain where the SharePoint server will be installed.

Once AD is installed properly, the system is ready for Step 3.

Step 3:

In order to install and configure SQL Server 2014, you first need to install .NET Framework 3.5, which can be installed as shown below.

  • Click on Server Manager –> Manage –> Add Roles and Features, then select the Features tab as shown below.

DOTNET3.5 install

  • Select .NET Framework 3.5 Features, include .NET 2.0 and 3.0 by selecting the check boxes, and click Next –> Install.
  • The installation will take around 1 to 3 minutes.

Step 4:

After installing .NET Framework 3.5, which is a requirement for SQL Server 2014, begin installing SQL Server 2014 on the Win2k12 server.

  • Mount the SQLServer2014SP2-Full-x64-ENU.iso and click on “Setup” application.

Install SQL2014

  • Now a popup screen appears in which you have to select Installation –> New SQL Server stand-alone installation or add features to an existing installation.

Click on New

  • The next popup asks you to enter a product key or choose Evaluation (a free 180-day period); select whichever is suitable for you and click Next.

product key

  • Read and accept the license terms, then click Next.
  • Once you click Next, it will check the prerequisites for SQL 2014; verify that all the prerequisites are met and continue by clicking Next.
  • Click Next on all the screens until Feature Selection, leaving the settings at their defaults; you can ignore the few warnings.
  • In Feature Selection, select “Database Engine Services” and “Management Tools – Complete”, then click Next as shown below.

management tools and database engine

  • The next screen, Feature Rules, is for configuration; leave it at the defaults and click Next.
  • In the Server Configuration section, provide your Windows login name as the account name and set the startup type to Automatic. If the credentials are correct you can move to the next screen; otherwise an error will appear after you click Next.

Server config

  • On the next screen you need to do the database configuration. Specify the authentication mode for the Database Engine, which is Windows authentication mode, and click “Add Current User” to add a user under the specify SQL Server administrators section. Click Next.

DB config

  • We are almost done with the installation; check the summary in the Ready to Install window and click Install.
  • The installation of SQL Server 2014 will begin; it will take 10 to 15 minutes, and you can watch the progress.

SQL 2014 installation

  • Check for the success status after installation and click the Close button to finish the installation.
  • Now let’s make a few changes to the SQL Server instance installed on the server. Open SQL Server Management Studio and connect using Windows Authentication, which connects to the SQL Server 2014 instance installed on the server.

connect to SQL studio

  • Right-click the server name in Object Explorer and choose Properties, as shown below.
  • Go to Security –> Logins, right-click the Windows login, and select Server Roles. Check the “dbcreator” and “securityadmin” boxes and click OK.

Set server roles

  • Assign the roles “dbcreator”, “securityadmin”, and “sysadmin” to “NT Authority\SYSTEM” (the system account), and do the same for the SQL Server account “NT SERVICE\MSSQLSERVER”.
  • This completes the SQL Server 2014 installation.

Step 5:

Once SQL Server 2014 is installed, we are ready to start the SharePoint Server 2016 installation.

There are some prerequisites for SharePoint 2016 which need to be installed before installing SharePoint 2016 itself. Follow the steps below, which make it easy to install all the required components.

  • First, mount the file (SharePoint_server_2016_x64_dvd.iso).
  • There are two methods to install the prerequisites for SharePoint 2016: running the bundled prerequisite installer, or the offline method. Normally I prefer the offline method, because the bundled prerequisite installer does not always work and is not reliable. So, let’s follow the offline method of installing the prerequisites.
  • Before proceeding with the offline method, let’s try running the installer that comes with the mounted SharePoint_server_2016_x64_dvd.iso file. The prerequisiteinstaller looks as shown in the image below.

mounted files sharepointserver2016

  • Run the prerequisiteinstaller application, which will try to install all of the necessary components required for SharePoint 2016. The prerequisites include the components below:

• Application Server Role, Web Server (IIS) Role
• Microsoft SQL Server 2012 Native Client
• Microsoft ODBC Driver 11 for SQL Server
• Microsoft Sync Framework Runtime v1.0 SP1 (x64)
• Windows Server AppFabric
• Microsoft Identity Extensions
• Microsoft Information Protection and Control Client 2.1
• Microsoft WCF Data Services 5.6
• Microsoft .NET Framework 4.6
• Cumulative Update Package 7 for Microsoft AppFabric 1.1 for Windows Server (KB3092423)
• Visual C++ Redistributable Package for Visual Studio 2012
• Visual C++ Redistributable Package for Visual Studio 2015
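If you want to stage the server roles from the list above ahead of time, they can be added with PowerShell (a sketch; feature names as they appear in Windows Server 2012 R2, so adjust to your environment):

```powershell
# Sketch: install the Application Server and Web Server (IIS) roles
# before running the prerequisite installer. Run in an elevated session.
Import-Module ServerManager
Install-WindowsFeature Application-Server, Web-Server -IncludeManagementTools
```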

  • Click Next in the installer, accept the license terms, and proceed. This normally fails to install, and the error appears as shown below:

prereq error

  • Do not worry if the prerequisiteinstaller throws an error. Let’s start the offline method of installing the SharePoint 2016 prerequisites.
  • There is a slight difference in the prerequisites for SharePoint 2016 compared to SharePoint 2013.
  • Let’s first download all of the prerequisites listed above in Step 5 and shown in the diagram above.
  • First, run the setup file from the mounted location of the SharePoint 2016 .iso.
  • This step will help us understand the prerequisites required to continue the installation of SharePoint 2016. The image below gives a clear picture of which prerequisites the installer still requires.

Setup_Run_for_prereq

  • There are 6 components that need to be installed as prerequisites before triggering the actual SharePoint 2016 installation. Now let’s download the required components using a PowerShell script, which downloads them directly from trusted Microsoft sites.
  • The script can be downloaded from the link below. Save the file with a .ps1 extension (a PowerShell script file).

Download-SP2016PreReqFiles

Open PowerShell as administrator and run the command as shown in the figure below:

Run the PowerShell script from the location where the file is saved. Ex: ” Desktop>.\Download-SP2016PreReqFiles.ps1 ”

runscript

 

Note: Before running the script, create a folder in C:\ (or any desired location) where you want the prerequisite files downloaded. Once the download is complete, you will see all of the files in the folder you specified. (Here, C:\Pre is the folder name.)
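Putting the note above into commands, a sketch (assuming the script was saved to the Desktop and C:\Pre is the target folder):

```powershell
# Sketch: create the download folder, allow scripts for this session only,
# unblock the downloaded script, then run it from an elevated prompt.
New-Item -ItemType Directory -Path C:\Pre -Force
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
Unblock-File -Path .\Download-SP2016PreReqFiles.ps1
.\Download-SP2016PreReqFiles.ps1
```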

download complete_prereq

The picture above shows successful completion of the prerequisites download, with all components saved into the given folder.

Note: The script does not include 2 components: Microsoft WCF Data Services 5.6 and Cumulative Update Package 7 for Microsoft AppFabric 1.1 for Windows Server (KB3092423). These can be downloaded from the trusted Microsoft locations https://www.microsoft.com/en-in/download/details.aspx?id=39373 and https://www.microsoft.com/en-us/download/details.aspx?id=49171 respectively.

  • Download and keep all of the components in a single folder, and let’s begin installing the components individually.
  • Except for Windows Server AppFabric and its patch, all the components can be installed manually by double-clicking them. Installing AppFabric and its patch requires a command, which is covered below.
  • Let’s begin by installing MicrosoftIdentityExtensions-64 as shown below.

Microsoft identity applications

  • After Microsoft Identity Extensions installation, install the Microsoft Sync framework Runtime.

Sync framework runtime install

  • Now install the third component i.e MSIPC (Active Directory Rights Management Services)

MSIPC install

  • Reboot the Win2k12 R2 server after all of the components are installed.
  • Let’s install the fourth component, i.e. Windows Server AppFabric, using the command below (run from the folder containing the installer):
.\WindowsServerAppFabricSetup_x64.exe /i CacheClient,CachingService,CachingAdmin /gac

windowsserverAppfabric

After installation, restart the Win2k12 R2 server.

  • After restarting the server, install the AppFabric Update 7 patch by double-clicking the application as shown below. Before installing, right-click the file and select Unblock. This is a mandatory step for any file downloaded from the internet.

Appfabric patch update7

  • Now again restart the Win2k12 R2 server.
  • After the reboot, install WCF Data Services by double-clicking the application directly. As before, right-click the file and select Unblock first; this is mandatory for any file downloaded from the internet.

WCF data services
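As an alternative to right-clicking each downloaded file, the whole folder can be unblocked in one pass with PowerShell (a sketch, assuming C:\Pre is the download folder used earlier):

```powershell
# Sketch: remove the "downloaded from the internet" block on every
# installer in the folder, instead of unblocking each file manually.
Get-ChildItem -Path C:\Pre -File | Unblock-File
```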

  • After installing the components, restart the Win2k12 R2 server and run Setup again to confirm that all the prerequisites are met. If the screen appears as below, all the prerequisites are met.

Valid Product Key

Note: Most of the prerequisites, such as sqlncli and .NET Framework 4.6, are installed when Windows Update runs, so updating Windows is a necessary step that takes care of most of the prerequisites.

  • Enter a valid product key, accept the license terms, and click Continue –> Install.
  • Installation of Microsoft SharePoint Server 2016 will take 10-15 minutes.

SharePoint2016 installation progress

  • After the installation is successful, click on Finish and a configuration wizard appears as below.

Welcome to sp config

  • Read the information and click Next, then select Create a new server farm in the next window and click Next.
  • Specify the configuration database settings as shown below by providing the server IP address (the database server), a database name of your choice, and the username and password for the database login. Then click Next.

Sharepoint config wizard

  • Next, specify a server role; here we use Single-Server Farm. Then click Next.

Sharepoint config

  • Specify the port for the Central Administration web application, configure the security settings (NTLM or Kerberos) for authentication, and click Next.

ConfigureSP central admin web app

  • Verify the configuration and click Next.

Completing the Configuration settings

  • Click Next after verifying, and the configuration will begin; it takes around 10-15 minutes to finish.

Configuring

Note: Troubleshooting: If the configuration fails with the message shown below, the WCF Data Services and AppFabric files were not unblocked before installation. Unblock both files and reinstall them, restart the Win2k12 R2 server, and begin the configuration again.

Config failed-troubleshooting

  • Once the configuration is successful, click Finish as shown below. A window then appears in the browser asking for authentication. Enter the username and password you provided for the database login and click OK.

authentication

  • After successful authentication, a Welcome screen appears as shown below.

Sharepoint site welcome screen

  • Start the wizard, select “Use existing managed account”, and click Next. This will then take a while to set up. (10-15 mins)

Use existing account

  • Finally a Create Site Collection screen appears in which you can create your desired site. Then click OK.

Sitecreation window

  • This step completes the farm configuration. SharePoint 2016 installation and configuration is now complete; click Finish.

Finish

This is the final step of the SharePoint 2016 server setup. These steps were tested in a virtual machine environment more than 10 times with a 100% success rate, so there is a very good chance they will work in your environment as well. Some troubleshooting steps are also covered in the article to help you handle problems.

Feel free to post any comments on this or if you get stuck between any steps.

You can take the following MVA courses if you are stuck at any point.

https://mva.microsoft.com/en-US/training-courses/initial-implementation-of-sharepoint-server-10342?l=zofRht16_505095253

https://mva.microsoft.com/en-US/training-courses/developing-sharepoint-server-core-solutions-jump-start-8262?l=bSwfjnKy_8204984382

https://mva.microsoft.com/en-US/training-courses/developing-sharepoint-server-advanced-solutions-jump-start-8238?l=D2NU8mJy_9804984382

https://mva.microsoft.com/en-US/training-courses/plan-and-configure-user-access-for-sharepoint-2013-11323?l=7XG3wN5CB_9105095253

https://mva.microsoft.com/en-US/training-courses/deep-dive-building-blocks-and-services-of-sharepoint-8933?l=H1H3ZFC3_2704984382

Reference links:

https://technet.microsoft.com/en-us/library/cc262485%28v=office.16%29.aspx#section4

https://technet.microsoft.com/en-IN/library/cc262957.aspx

For high availability, you may consider looking at the picture below for a system reference:

https://www.sharepointeurope.com/media/387321/a_high_availability_architecture_550x343.jpg

This poster describes the SharePoint Online, Microsoft Azure, and SharePoint on-premises configurations that business decision makers and solutions architects need to know about.

SharePoint Online, Azure, and SharePoint on-premises configurations

Download: PDF | Visio

This poster describes four architectural models:

  • SharePoint Online (SaaS) – Consume SharePoint through a Software as a Service (SaaS) subscription model.
  • SharePoint Hybrid – Move your SharePoint sites and apps to the cloud at your own pace.
  • SharePoint in Azure (IaaS) – You extend your on-premises environment into Microsoft Azure and deploy SharePoint 2016 Servers there. (This is recommended for High Availability/Disaster Recovery and dev/test environments.)
  • SharePoint On-premises – You plan, deploy, maintain and customize your SharePoint environment in a datacenter that you maintain.

This poster shows the recommended MinRole topologies in a SharePoint on-premises environment.

SharePoint Server MinRole topologies

Download: PDF | Visio

This poster shows the different recommended MinRole topologies that can be deployed in a SharePoint Server 2016 environment. It also shows the associated services that are provisioned with each role type.

This poster shows the databases that support SharePoint Server 2016.

SharePoint Server 2016 databases

Download: PDF | Visio

This poster is a quick reference guide to the databases that support SharePoint Server 2016. Each database has the following details:

  • Size
  • Scaling guidance
  • I/O patterns
  • Requirements

The first page contains the SharePoint system databases and the service applications that have multiple databases.

The second page shows all of the service applications that have single databases.

For more information about the SharePoint Server 2016 databases, see Database types and descriptions in SharePoint Server 2016

These posters describe search architectures in SharePoint Server 2016.

Search Architectures for SharePoint Server 2016

Poster with an overview of the search components and search databases, how they interact, and an example of a search architecture built from these components and databases.

Download: PDF | Visio

This poster gives an overview of the search architecture in SharePoint Server 2016. It describes the search components and databases in the search architecture and how these interact. It also shows an example of a medium-sized search farm.
Enterprise Search Architectures for SharePoint Server 2016

Poster describing the search components and databases, three model architectures for enterprise search, hardware requirements and scaling considerations.

Download: PDF | Visio

This poster gives an overview of enterprise search architecture in SharePoint Server 2016. It shows sample search architectures for small, medium, and large-sized enterprise search farms. It also gives scaling considerations and hardware requirements.
Internet Sites Search Architectures for SharePoint Server 2016

Poster describing the search components and databases, a model architecture for Internet sites search, hardware requirements, scaling considerations, and performance considerations.

Download: PDF | Visio

This poster gives an overview of the search architecture for Internet sites in SharePoint Server 2016. It shows a sample search architecture for a medium-sized search farm. It also gives performance considerations and hardware requirements.

Install Windows 10 IoT Core for the Raspberry Pi

Disclaimer: This is not my original work, just a collective effort for all IoT learners support. The credit for the original writer has been included at the end of the post.

In this tutorial, I will be going through the process of installing and setting up Windows 10 IoT Core for the Raspberry Pi.

For those who don’t know, Windows 10 IoT Core is a version of the Windows 10 operating system built just for IoT devices such as the Raspberry Pi. This is very useful if you plan on using something like UWP to write your application; it also gives you access to Windows 10’s core and its wide variety of features.

I very briefly go into coding and pushing applications to the device. If you need to learn more about how to do things, then I highly recommend looking at some of Microsoft’s documentation as it is very thorough.

Please note that to complete this tutorial you will need either a Raspberry Pi 2 or a Raspberry Pi 3; Windows 10 IoT Core is unsupported on other versions of the Raspberry Pi.

Take a look at this Video Tutorial. https://www.youtube.com/watch?v=YSVofU4Hu5o

Equipment

To be able to install Windows 10 IoT on the Raspberry Pi correctly you will need the following pieces of equipment.

Recommended:

Raspberry Pi 2 or 3

Micro SD Card

Ethernet Cord

Optional:

Raspberry Pi Case

USB Keyboard

USB Mouse
You will also need a computer running Windows 10 to be able to complete the following process.

Installing Windows 10 IoT on your Raspberry Pi

1. To begin, we will first need to download and install the Windows 10 IoT Core Dashboard. To download it, just go to the Windows 10 IoT website here.
This piece of software will download the correct system image for our Raspberry Pi and format the SD card.

2. Insert your SD card into the computer or laptop’s SD card reader and check the drive letter allocated to it, e.g. G:. You will need to know this to ensure that you format the correct drive, as you don’t want to be doing this to any important data.

3. Now that you have inserted your SD Card into your computer/laptop, we will need to run the “Windows 10 IoT Core Dashboard” software. If you can’t find this easily after installing it then try running a search.

With the software loaded up we need to go into the “Set up a new device” (1.) screen as shown below.

On here you will want to set your “Device name” and the “New Administration password“. Make sure that you set the password to something you can remember easily but that is still secure, as this password is what you will use to remotely connect to your Raspberry Pi (2.).

Before we continue, make sure that “Drive” is set to the correct drive; the drive letter should be the same as the SD card you inserted in step 2.

When you have filled in your information, tick “I accept the software licence terms” and then press the “Download and install” button (3.).

Windows 10 IoT Dashboard Setup a new device

4. Once the software has finished downloading and installing Windows 10 IoT Core onto the SD card, we can proceed with this tutorial. Now safely eject your Micro SD card from your computer so you can put it into your Raspberry Pi.

Booting and setting up your Win 10 IoT device

1. Now that we have successfully downloaded and written the image to our Raspberry Pi’s Micro SD card we can insert the SD Card back into the Raspberry Pi.

2. Before we power the device back on, make sure that you plug in an HDMI cable, a mouse, and a keyboard; we will need all 3 of these if you intend to set up Wi-Fi on your Raspberry Pi Windows 10 IoT device.

Once done you can plug your Raspberry Pi back into power and allow it to start booting up.

3. Now comes the long wait for your Raspberry Pi to start up. When I did this, it took a fair while for the Raspberry Pi to boot; don’t be afraid if you think it may have frozen, as the initial setup and startup take some serious time.

4. Once it has finished starting up, you should be greeted with a screen like the one below. Now, to set up a WiFi connection, we need to click the cog in the top right-hand corner.

Windows 10 IoT on the Raspberry Pi

5. In the next menu, go to “WiFi and Network” and select the WiFi access point you want to connect to; you will receive a prompt asking you to enter your network password.

Once you have connected to your WiFi network you can return to the main screen to grab your Raspberry Pi’s IP Address, as we will need this further along in the tutorial.

Raspberry Pi Windows 10 IoT Set WiFi

Connecting to Your Device

Now there are 3 ways you’re able to connect to your Raspberry Pi Windows 10 IoT device. I will quickly mention each method now.

Web Browser

First off is using your web browser to talk to the Raspberry Pi; it is probably the easiest of the 3 main methods. All you need to do is point your web browser at your Raspberry Pi’s IP address on port 8080.

For example, my Raspberry Pi’s local IP address is 192.168.0.143, so in my favorite web browser I would type in http://192.168.0.143:8080

You can also use the “Windows 10 IoT Core Dashboard” tool to click through to the device’s web page. Simply load up the application, go to the “My Devices” (1.) tab in the left sidebar, right-click (2.) on the device you want to connect to, and click “Open in Device Portal” (3.).

Windows 10 IoT My Devices Screen

Upon either going to your Raspberry Pi’s IP address or using the Windows 10 IoT Core Dashboard tool, you will first be asked to log in. Make sure you use administrator as the username, and the password you set at the beginning of this tutorial as the password.

Upon successfully logging in, you should be greeted with the screen below. I recommend exploring around, as the web tool offers a fair bit of access and insight into your device. You can debug and see real-time performance through this interface, which is incredibly helpful for seeing what your Raspberry Pi is doing.

Raspberry Pi Windows 10 IoT Website

PowerShell

PowerShell is not a tool that many will be too familiar with, but it is Microsoft’s more advanced version of command prompt giving you access to a wealth of tools including the ability to administer remote systems, a feature we will be making use of shortly.

PowerShell makes it rather simple to interact with your Raspberry Pi Windows 10 IoT device, as we will show shortly. There are two ways of connecting to your device through PowerShell. The easier way relies on the “Windows 10 IoT Core Dashboard” tool (steps 1a+); the other way uses PowerShell for everything (steps 1b+).

1a. First off, we will explain the simple way. Load up the “Windows 10 IoT Core Dashboard” tool. With the application open, go to the “My Devices” (1.) tab in the sidebar, right-click (2.) on the device you want to connect to, and click “Launch PowerShell” (3.).

Raspberry Pi Windows 10 IoT Dashboard Launch Powershell

2a. This will launch a PowerShell session that will automatically begin to connect to your Raspberry Pi. When prompted enter the password we set at the start of this tutorial. You should be greeted with a PowerShell window like shown below when you have been successfully connected.

1b. The second way of connecting to your Raspberry Pi is slightly more complicated and utilizes PowerShell completely. To open PowerShell on Windows 10, right click the windows Icon and select “Windows Powershell (Admin)“.

2b. In here we want to type in the following command; this adds our Raspberry Pi as a trusted device for PowerShell to connect to. Make sure you replace [YOUR_PI_IP_ADDRESS] with your Raspberry Pi’s local IP address.

Set-Item WSMan:\localhost\Client\TrustedHosts -Value [YOUR_PI_IP_ADDRESS]

3b. With that done, we can now start a PowerShell session with our Raspberry Pi Windows 10 IoT device. To do this, enter the command below into PowerShell, making sure you replace [YOUR_PI_IP_ADDRESS] with your Raspberry Pi’s local IP address.

Enter-PSSession -ComputerName [YOUR_PI_IP_ADDRESS] -Credential [YOUR_PI_IP_ADDRESS]\Administrator

4b. You will be asked to enter the password you set earlier in this tutorial. Enter that to continue.

5b. After about 30 seconds, PowerShell should have now successfully made the connection and you should see a screen like below.

Raspberry Pi Windows 10 IoT Core Powershell connection
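The 1b-3b steps above can be sketched as a single sequence (192.168.0.143 is only an example address; replace it with your Raspberry Pi's IP):

```powershell
# Sketch: trust the device, then open a remote session as Administrator.
# You will be prompted for the password set in the IoT Core Dashboard.
$ip = "192.168.0.143"
Set-Item WSMan:\localhost\Client\TrustedHosts -Value $ip -Force
Enter-PSSession -ComputerName $ip -Credential "$ip\Administrator"
```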

SSH

The third main way of interacting with your Raspberry Pi Windows 10 IoT device is to utilize SSH. The main advantage of this is that it is a widely available protocol and is something most users of the Raspberry Pi will be thoroughly familiar with.

You can also follow the SSH instructions below in order to use SSH to connect to your device.

1. To start off, make sure you have an SSH client installed; on Windows I highly recommend using either PuTTY or MobaXterm.

2. Now in your SSH Client connect to your Raspberry Pi’s IP Address on port 22 (The default SSH port).

3. When asked for the username you want to log in with, make sure you use administrator, as this is the default login username for Windows 10 IoT Core.

4. You will now be asked for the password associated with the account; the password you want to use is the one you set within the Windows 10 IoT Core Dashboard at the start of this tutorial.

5. You should now be successfully logged into your Raspberry Pi Windows 10 IoT Core device and should be greeted with a screen like what is shown below.

Raspberry Pi Windows 10 IoT SSH
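The SSH steps above boil down to a single command (a sketch; shown here with the example IP address from earlier, which you should replace with your device's address):

```powershell
# Sketch: connect as the default administrator account over SSH (port 22).
# You will be prompted for the password set in the IoT Core Dashboard.
ssh administrator@192.168.0.143
```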

If you want to learn more about using SSH and some of the commands you can use within the session, then make sure you take a look at Microsoft’s own IoT documentation here.

Setting up Visual Studio for Windows 10 IoT Core

Lastly, you are most likely going to want to set up Visual Studio Community, so that you are able to start developing your own applications for Windows 10 IoT Core.

Installation

1. First, we must download and install Visual Studio Community. Luckily, this is easily available on Microsoft’s website; you can find it on the Visual Studio Community page.

Be warned that the download and installation of Visual Studio Community can take some time especially on slow internet connections.

2. Once the installation process has completed, you can continue with this tutorial. Start by launching Visual Studio Community. It will ask you to do some configuration; it should be fine to just use the default settings.

3. You will notice that there aren’t any IoT templates in the default installation; they have to be installed separately.

4. The project templates for Windows 10 IoT Core are what is currently missing. We can grab and install these by going to the Visual Studio Marketplace.

5. Once you have downloaded and installed the templates, close and re-open Visual Studio, you need to do this for Visual Studio to load them in.

6. Upon creating a new project, you will be prompted to activate developer mode on your Windows 10 device. Simply follow the prompts to activate it.

7. Everything should now be ready for you to code your new application. You can find documentation on certain features of Windows 10 IoT Core by going to their documentation page. You can also find a document that explains how to utilize the GPIO pins from within Windows 10 IoT by going to their GPIO documentation.

Pushing code to the device

1. Once you have your new application in a state in which you want to deploy it to your Raspberry Pi Windows 10 IoT device, go up to the toolbar button that has a green arrow in it.

2. Click the black drop-down arrow and select Remote Machine.

3. In here you should be able to select your Raspberry Pi under automatic configuration; however, in some cases this will not work correctly and you will have to manually enter the IP address of your Raspberry Pi.

4. You should now be able to push code / applications to your Windows 10 IoT Raspberry Pi.

I hope you have now learned how to install Windows 10 IoT Core for the Raspberry Pi. If I have missed anything, if you are having trouble, or if there is anything else you would like to share, then be sure to drop a comment below.

All credit goes to PyMyLifeUp

Windows 10 IoTCore

Introduction

This project’s goal is to demonstrate guidelines for creating a Windows 10 IoTCore based product and walk through the creation of an IoT device, from implementation to final deployment.

The project has two applications:

  • One background application to receive sensor data and send it to the Azure cloud. Receiving sensor data and analyzing it are important tasks in IoT and a device will often operate in “headless” mode for monitoring; thus, we separate these tasks in an independent app. It also receives application keys securely and saves user settings to Azure.
  • One foreground application for user interaction. This application shows local weather (read by the background app), information from the internet (news and regional weather) and interacts with the user (playing media or showing a slideshow). A settings page is also available to change settings.

App communication

The applications are written using Universal Windows Platform (UWP); thus, the same foreground app can be run on both IoT and Desktop.

Guides

Steps from implementation of apps to deployment are documented with an end-to-end solution. Each tutorial shows small code snippets and then links to the code running in the walkthrough project.

Sections

  1. About the project
  2. Background application
  3. Foreground application
  4. Inter-application communication
  5. Connecting to the Azure cloud
  6. Integration with third-party services
  7. Preparing for deployment
  8. Deployment
    • Creating a retail OEM image

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Removing the Office 2013 version of Office 365 ProPlus

Office 365 admins and users should know that from March 1, 2017, Office 2013 installation from your Office 365 tenant will no longer be available. Office 2016 is the recommended version of Office 365 ProPlus and includes all the latest upgrades and new features. As the Microsoft Office team announced in September 2015 when they released Office 2016, beginning March 1, 2017, the Office 2013 version of Office 365 ProPlus will no longer be available for installation from the Office 365 portal.
How does this affect me?
Beginning March 1, 2017, your users will no longer see Office 2013 as an option for download through the Office 365 portal, and admins will no longer have the option under Software download settings in the admin portal to choose to enable Office 2013.
In addition, the Office 365 team will no longer provide feature updates or support for this version.
What do I need to do to prepare for this change?
We recommend you install Office 2016 as soon as possible to have the latest and greatest features and support. Please click Additional Information to learn more.