
Saturday, March 29, 2014

VDI and Its Key Implementation Considerations


Virtual Desktop Infrastructure (VDI)



Virtual Desktop Infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server. VDI is a variation on the client/server computing model, sometimes referred to as server-based computing.

In the past couple of years, some large organizations have turned to VDI as an alternative to the server-based computing model used by Citrix and Microsoft Terminal Services.

Virtual Desktop Infrastructure (VDI) is a desktop-centric service that hosts user desktop environments on remote servers and/or blade PCs, which are accessed over a network using a remote display protocol. A connection brokering service is used to connect users to their assigned desktop sessions. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same desktop environment with their applications and data. For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and the business.
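
To picture how connection brokering works, think of the broker as a lookup service that maps an authenticated user to an assigned desktop VM, or hands one out from a pool of idle VMs. The following is a minimal Python sketch of that idea only; real brokers also handle authentication, load balancing, session reconnection and the remote display protocol hand-off, and the VM names used here are purely illustrative.

```python
# Minimal, hypothetical sketch of a VDI connection broker:
# map an authenticated user to an assigned desktop VM, or hand out
# one from a pool of idle VMs. Real brokers also handle load
# balancing, session reconnection, and the remote display protocol.

class ConnectionBroker:
    def __init__(self, idle_pool):
        self.idle_pool = list(idle_pool)   # VMs waiting for assignment
        self.assignments = {}              # user -> VM name

    def connect(self, user):
        # Returning users are reconnected to their existing desktop,
        # so they see the same environment from any client device.
        if user in self.assignments:
            return self.assignments[user]
        if not self.idle_pool:
            raise RuntimeError("no desktops available")
        vm = self.idle_pool.pop(0)
        self.assignments[user] = vm
        return vm

broker = ConnectionBroker(["win7-vdi-01", "win7-vdi-02"])
print(broker.connect("alice"))   # win7-vdi-01
print(broker.connect("alice"))   # same desktop on reconnect
```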


Key Implementation Considerations 


There are many considerations when hosting Virtual Desktop Infrastructure (VDI) in the enterprise. The following are three of the most significant:


Licensing :


Organizations often underestimate the impact of Windows client licenses in a VDI environment. Microsoft licensing can be complex, and VDI licensing can be even more complicated. Generally speaking, Microsoft requires a license for each device connecting to VDI, a license for the VDI instance itself and, if the endpoint runs Windows, a traditional Windows license as well. This is an area that needs to be well researched prior to undertaking a VDI implementation.



Desktop Management :


Hardware costs are not the only consideration for VDI clients. Many organizations are tempted to use traditional desktops to access the VDI infrastructure, especially as the price difference between fully loaded desktops and thin clients continues to narrow. That cost and convenience should be weighed against the management cost. For every traditional desktop used for VDI access, you essentially double the management overhead for that user: it's not just the VDI instance that must be maintained; the physical client still needs software and security updates.



Storage :


Most virtualization engineers are aware that physical memory is the usual bottleneck in virtualized environments. In VDI environments, however, storage I/O is the bigger concern. The usage patterns of end-user workstations are very different from those of servers, so storage I/O can be the biggest performance headache when optimizing the system. To that end, hyper-converged platforms -- which combine compute and storage in the same nodes -- are quickly becoming the preferred strategy for VDI due to performance, cost and reliability.
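
To see why storage I/O dominates VDI sizing, it helps to run the arithmetic. The figures below (IOPS per desktop, boot-storm multiplier, read/write ratio) are illustrative assumptions, not vendor guidance; measure your own desktops before sizing storage.

```python
# Rough, illustrative VDI storage sizing arithmetic.
# The per-desktop IOPS figures are assumptions for the example,
# not vendor guidance; measure your own workloads before sizing.

desktops          = 500
steady_state_iops = 10    # assumed average IOPS per desktop in use
boot_storm_factor = 3     # assumed multiplier when many desktops boot at once
write_ratio       = 0.7   # VDI workloads are typically write-heavy

steady_total = desktops * steady_state_iops
peak_total   = steady_total * boot_storm_factor

print(f"Steady-state: {steady_total:,} IOPS "
      f"(~{steady_total * write_ratio:,.0f} writes)")
print(f"Boot storm:   {peak_total:,} IOPS")
# Steady-state: 5,000 IOPS (~3,500 writes)
# Boot storm:   15,000 IOPS
```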



Sunday, March 23, 2014

Overview - Windows Server 2012 R2 Hyper-V Network Virtualization





Hyper-V Network Virtualization


In Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager, Microsoft provides an end-to-end network virtualization solution. There are four major components that comprise Microsoft’s network virtualization solution:

  • Windows Azure Pack for Windows Server provides a tenant-facing portal to create virtual networks.
  • System Center 2012 R2 Virtual Machine Manager (VMM) provides centralized management of the virtual networks.
  • Hyper-V Network Virtualization provides the infrastructure needed to virtualize network traffic.
  • Hyper-V Network Virtualization gateways provide connections between virtual and physical networks.


This topic introduces concepts and explains the key benefits and capabilities of Hyper-V Network Virtualization (one part of the overall network virtualization solution) in Windows Server 2012 R2. It explains how network virtualization benefits both private clouds looking for enterprise workload consolidation and public cloud service providers of Infrastructure as a Service (IaaS).

Feature description


Hyper-V Network Virtualization provides “virtual networks” (called VM networks) to virtual machines, similar to how server virtualization (the hypervisor) provides “virtual machines” to the operating system. Network virtualization decouples virtual networks from the physical network infrastructure and removes the constraints of VLAN and hierarchical IP address assignment from virtual machine provisioning. This flexibility makes it easy for customers to move to IaaS clouds and efficient for hosters and datacenter administrators to manage their infrastructure, while maintaining the necessary multi-tenant isolation and security requirements and supporting overlapping virtual machine IP addresses.


Customers want to seamlessly extend their datacenters to the cloud. Today, there are technical challenges in building such seamless hybrid cloud architectures. One of the biggest hurdles customers face is reusing their existing network topologies (subnets, IP addresses, network services, and so on) in the cloud and bridging between their on-premises resources and their cloud resources. Hyper-V Network Virtualization provides the concept of a VM network that is independent of the underlying physical network. With this concept of a VM network, composed of one or more virtual subnets, the exact location in the physical network of virtual machines attached to a virtual network is decoupled from the virtual network topology. As a result, customers can easily move their virtual subnets to the cloud while preserving their existing IP addresses and topology, so that existing services continue to work unaware of the physical location of the subnets. That is, Hyper-V Network Virtualization enables a seamless hybrid cloud.
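
Conceptually, the decoupling works by giving every virtual machine two addresses: the Customer Address (CA) the tenant sees inside its VM network, and the Provider Address (PA) of the physical host currently running the VM, with a Virtual Subnet ID identifying the tenant's virtual subnet. The Python sketch below is only a toy model of that lookup-and-encapsulate step (in Windows Server 2012 R2 the real encapsulation is NVGRE), intended to show why two tenants can reuse the same IP address.

```python
# Conceptual model of Hyper-V Network Virtualization address mapping:
# each (Virtual Subnet ID, Customer Address) pair maps to the Provider
# Address of the physical host currently running that VM. This is a
# toy illustration, not the real NVGRE data path.

lookup = {
    # (VSID,  customer address) ->  provider address (physical host)
    (5001, "10.0.0.5"):  "192.168.1.10",
    (5001, "10.0.0.6"):  "192.168.1.11",
    (6001, "10.0.0.5"):  "192.168.1.11",   # another tenant reusing 10.0.0.5
}

def encapsulate(vsid, src_ca, dst_ca, payload):
    """Wrap a tenant packet in an outer header addressed to the hosts."""
    outer_src = lookup[(vsid, src_ca)]
    outer_dst = lookup[(vsid, dst_ca)]
    return {"outer_src": outer_src, "outer_dst": outer_dst,
            "vsid": vsid, "inner": (src_ca, dst_ca, payload)}

# Two tenants can use the same 10.0.0.5 address because the VSID keeps
# their lookup entries, and therefore their traffic, separate.
print(encapsulate(5001, "10.0.0.5", "10.0.0.6", b"hello"))
```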


In addition to hybrid cloud, many organizations are consolidating their datacenters and creating private clouds to gain the efficiency and scalability benefits of cloud architectures internally. Hyper-V Network Virtualization allows better flexibility and efficiency for private clouds by decoupling a business unit’s network topology (by making it virtual) from the actual physical network topology. In this way, business units can easily share an internal private cloud while remaining isolated from each other and keeping their existing network topologies. The datacenter operations team has the flexibility to deploy and dynamically move workloads anywhere in the datacenter without service interruption, providing better operational efficiency and an overall more effective datacenter.


For workload owners, the key benefit is that they can now move their workload “topologies” to the cloud without changing their IP addresses or rewriting their applications. For example, the typical three-tier LOB application is composed of a front-end tier, a business logic tier, and a database tier. Through policy, Hyper-V Network Virtualization allows customers to onboard all or part of the three tiers to the cloud, while keeping the routing topology and the IP addresses of the services (i.e., the virtual machine IP addresses), without requiring the applications to be changed.


For infrastructure owners, the additional flexibility in virtual machine placement makes it possible to move workloads anywhere in the datacenter without changing the virtual machines or reconfiguring the networks. For example, Hyper-V Network Virtualization enables cross-subnet live migration, so that a virtual machine can live migrate anywhere in the datacenter without a service disruption. Previously, live migration was limited to the same subnet, which restricted where virtual machines could be located. Cross-subnet live migration allows administrators to consolidate workloads based on dynamic resource requirements and energy efficiency, and it can also accommodate infrastructure maintenance without disrupting customer workload uptime.


Real Life and Windows Server 2012 R2 Hyper-V Network Virtualization


With the success of virtualized datacenters, IT organizations and hosting providers (providers who offer colocation or physical server rentals) have begun offering more flexible virtualized infrastructures that make it easier to offer on-demand server instances to their customers. This new class of service is referred to as Infrastructure as a Service (IaaS). Windows Server 2012 R2 provides all the required platform capabilities to enable enterprise customers to build private clouds and transition to an IT as a service operational model. Windows Server 2012 R2 also enables hosters to build public clouds and offer IaaS solutions to their customers. When combined with Virtual Machine Manager to manage Hyper-V Network Virtualization policy, Microsoft provides a powerful cloud solution.


Windows Server 2012 R2 Hyper-V Network Virtualization provides policy-based, software-controlled network virtualization that reduces the management overhead faced by enterprises when they expand dedicated IaaS clouds, and it provides cloud hosters better flexibility and scalability for managing virtual machines to achieve higher resource utilization.


IaaS Cloud and Traditional VLANs


An IaaS scenario that has virtual machines from different organizational divisions (dedicated cloud) or different customers (hosted cloud) requires secure isolation. Today’s solution, virtual local area networks (VLANs), can present significant disadvantages in this scenario.


VLANs   Currently, VLANs are the mechanism that most organizations use to support address space reuse and tenant isolation. A VLAN uses explicit tagging (VLAN ID) in the Ethernet frame headers, and it relies on Ethernet switches to enforce isolation and restrict traffic to network nodes with the same VLAN ID. 

The main disadvantages with VLANs are as follows:


  • Increased risk of an inadvertent outage due to cumbersome reconfiguration of production switches whenever virtual machines or isolation boundaries move in the dynamic datacenter.
  • Limited in scalability, because the 12-bit VLAN ID allows a maximum of 4,094 usable VLANs and typical switches support no more than 1,000 VLAN IDs.
  • Constrained within a single IP subnet, which limits the number of nodes within a single VLAN and restricts the placement of virtual machines based on physical locations. Even though VLANs can be expanded across sites, the entire VLAN must be on the same subnet.

IP address assignment   In addition to the disadvantages that are presented by VLANs, virtual machine IP address assignment presents issues, which include:

  • Physical locations in datacenter network infrastructure determine virtual machine IP addresses. As a result, moving to the cloud typically requires changing IP addresses of the service workloads.
  • Policies are tied to IP addresses, such as firewall rules, resource discovery and directory services, and so on. Changing IP addresses requires updating all the associated policies.
  • Virtual machine deployment and traffic isolation are dependent on the topology.


When datacenter network administrators plan the physical layout of the datacenter, they must make decisions about where subnets will be physically placed and routed. These decisions are based on IP and Ethernet technology that influence the potential IP addresses that are allowed for virtual machines running on a given server or a blade that is connected to a particular rack in the datacenter. When a virtual machine is provisioned and placed in the datacenter, it must adhere to these choices and restrictions regarding the IP address. Therefore, the typical result is that the datacenter administrators assign new IP addresses to the virtual machines.


The problem with this requirement is that an IP address is not just an address; it also carries semantic information. For instance, one subnet may contain given services or be in a distinct physical location. Firewall rules, access control policies, and IPsec security associations are commonly associated with IP addresses. Changing IP addresses forces the virtual machine owners to adjust all the policies that were based on the original IP address. This renumbering overhead is so high that many enterprises choose to deploy only new services to the cloud, leaving legacy applications alone.


Hyper-V Network Virtualization decouples virtual networks for customer virtual machines from the physical network infrastructure. As a result, it enables customer virtual machines to maintain their original IP addresses, while allowing datacenter administrators to provision customer virtual machines anywhere in the datacenter without reconfiguring physical IP addresses or VLAN IDs.


Functionality, benefits, and capabilities of Hyper-V Network Virtualization in Windows Server 2012 R2 :



Enables flexible workload placement – Network isolation and IP address re-use without VLANs 


Hyper-V Network Virtualization decouples the customer’s virtual networks from the physical network infrastructure of the hosters, providing freedom for workload placements inside the datacenters. Virtual machine workload placement is no longer limited by the IP address assignment or VLAN isolation requirements of the physical network because it is enforced within Hyper-V hosts based on software-defined, multitenant virtualization policies.


Virtual machines from different customers with overlapping IP addresses can now be deployed on the same host server without requiring cumbersome VLAN configuration or violating the IP address hierarchy. This can streamline the migration of customer workloads into shared IaaS hosting providers, allowing customers to move those workloads without modification, which includes leaving the virtual machine IP addresses unchanged. For the hosting provider, supporting numerous customers who want to extend their existing network address space to the shared IaaS datacenter is a complex exercise of configuring and maintaining isolated VLANs for each customer to ensure the coexistence of potentially overlapping address spaces. With Hyper-V Network Virtualization, supporting overlapping addresses is made easier and requires less network reconfiguration by the hosting provider.


In addition, physical infrastructure maintenance and upgrades can be done without causing downtime for customer workloads. With Hyper-V Network Virtualization, virtual machines on a specific host, rack, subnet, VLAN, or entire cluster can be migrated without requiring a physical IP address change or major reconfiguration.


Enables easier moves for workloads to a shared IaaS cloud 


With Hyper-V Network Virtualization, IP addresses and virtual machine configurations remain unchanged. This enables IT organizations to more easily move workloads from their datacenters to a shared IaaS hosting provider with minimal reconfiguration of the workload or their infrastructure tools and policies. In cases where there is connectivity between two datacenters, IT administrators can continue to use their tools without reconfiguring them.


Enables live migration across subnets 


Live migration of virtual machine workloads traditionally has been limited to the same IP subnet or VLAN because crossing subnets required the virtual machine’s guest operating system to change its IP address. This address change breaks existing communication and disrupts the services running on the virtual machine. With Hyper-V Network Virtualization, workloads can be live migrated from servers running Windows Server 2012 in one subnet to servers running Windows Server 2012 in a different subnet without changing the workload IP addresses. Hyper-V Network Virtualization ensures that virtual machine location changes due to live migration are updated and synchronized among hosts that have ongoing communication with the migrated virtual machine.


Enables easier management of decoupled server and network administration 


Server workload placement is simplified because migration and placement of workloads are independent of the underlying physical network configurations. Server administrators can focus on managing services and servers, and network administrators can focus on overall network infrastructure and traffic management. This enables datacenter server administrators to deploy and migrate virtual machines without changing the IP addresses of the virtual machines. There is reduced overhead because Hyper-V Network Virtualization allows virtual machine placement to occur independently of network topology, reducing the need for network administrators to be involved with placements that might change the isolation boundaries.



Simplifies the network and improves server/network resource utilization 


The rigidity of VLANs and the dependency of virtual machine placement on the physical network infrastructure result in overprovisioning and underutilization. Breaking this dependency increases the flexibility of virtual machine workload placement, which can simplify network management and improve server and network resource utilization. Note that Hyper-V Network Virtualization still supports VLANs in the context of the physical datacenter. For example, a datacenter may want all Hyper-V Network Virtualization traffic to be on a specific VLAN.



Compatible with existing infrastructure and emerging technology 


Hyper-V Network Virtualization can be deployed in today’s datacenter, yet it is compatible with emerging datacenter “flat network” technologies.


Provides for interoperability and ecosystem readiness 


Hyper-V Network Virtualization supports multiple configurations for communication with existing resources, such as cross-premises connectivity, storage area networks (SAN), non-virtualized resource access, and so on. Microsoft is committed to working with ecosystem partners to support and enhance the experience of Hyper-V Network Virtualization in terms of performance, scalability, and manageability.


Uses Windows PowerShell and WMI 


Hyper-V Network Virtualization supports Windows PowerShell and Windows Management Instrumentation (WMI) for configuring the network virtualization and isolation policies. The Windows PowerShell cmdlets for Hyper-V Network Virtualization enable administrators to build command-line tools or automated scripts to configure, monitor, and troubleshoot network isolation policies.





Saturday, March 22, 2014

What is UTM ???

UTM (Unified Threat Management)


Unified Threat Management (UTM) is a solution in the network security industry, and since 2004 it has gained currency as a primary network gateway defense solution for organizations. In theory, UTM is the evolution of the traditional firewall into an all-inclusive security product able to perform multiple security functions within one single appliance: network firewall, network intrusion prevention, gateway antivirus (AV), gateway anti-spam, VPN, content filtering, load balancing, data leak prevention and on-appliance reporting.

The worldwide UTM market was worth approximately $1.2 billion in 2007, with a forecast of 35-40% compound annual growth through 2011. The primary market for UTM providers is the SMB and enterprise segments, although a few providers now offer UTM solutions for small offices and remote offices.

The term UTM was originally coined by market research firm IDC. The advantage of unified security lies in the fact that rather than administering multiple systems that individually handle antivirus, content filtering, intrusion prevention and spam filtering, organizations can deploy a single rack-mountable UTM appliance that takes over all of that functionality.

Unified threat management (UTM) refers to a comprehensive security product that includes protection against multiple threats. A UTM product typically includes a firewall, antivirus software, content filtering and a spam filter in a single integrated package. The term was originally coined by IDC, a provider of market data, analytics and related services. UTM vendors include Fortinet, LokTek, Secure Computing Corporation and Symantec.

The principal advantages of UTM are simplicity, streamlined installation and use, and the ability to update all the security functions or programs concurrently. As the nature and diversity of Internet threats evolve and grow more complex, UTM products can be tailored to keep up with them all. This eliminates the need for systems administrators to maintain multiple security programs over time.

Utility of UTM


A single UTM appliance simplifies management of a company's security strategy, with just one device taking the place of multiple layers of hardware and software. All the security functions can also be monitored and configured from a single centralized console.

In this context, UTMs represent all-in-one security appliances that carry a variety of security capabilities including firewall, VPN, gateway anti-virus, gateway anti-spam, intrusion prevention, content filtering, bandwidth management, application control and centralized reporting as basic features. The UTM has a customized OS holding all the security features in one place, which can lead to better integration and throughput than a collection of disparate devices.
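
The "all features in one appliance" idea can be pictured as a single inspection pipeline that every connection must pass through before it is allowed. The Python sketch below is a toy illustration of that chaining only; real UTM engines run far more sophisticated signature, reputation and protocol analysis on a hardened, purpose-built OS.

```python
# Toy illustration of the UTM idea: one appliance chains several
# inspection stages, and a connection must pass all of them.
# Real UTM engines are far more sophisticated; this only shows the chaining.

def firewall(conn):
    return conn["dst_port"] in {80, 443, 25}       # allow only known services

def intrusion_prevention(conn):
    return b"<script>" not in conn["payload"]      # crude signature check

def gateway_antivirus(conn):
    return b"EICAR" not in conn["payload"]         # stand-in for AV scanning

PIPELINE = [firewall, intrusion_prevention, gateway_antivirus]

def inspect(conn):
    for stage in PIPELINE:
        if not stage(conn):
            return f"blocked by {stage.__name__}"
    return "allowed"

print(inspect({"dst_port": 443, "payload": b"GET / HTTP/1.1"}))  # allowed
print(inspect({"dst_port": 443, "payload": b"<script>evil()"}))  # blocked
```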

For enterprises with remote networks or distantly located offices, UTMs are a means to provide centralized security with control over their globally distributed networks.

Pros :


  • Reduced complexity: single security solution, single vendor, single AMC (annual maintenance contract)
  • Simplicity: Avoidance of multiple software installation and maintenance
  • Easy Management: Plug & Play Architecture, Web-based GUI for easy management
  • Reduced technical training requirements, one product to learn.
  • Regulatory compliance


Cons : 

  • Single point of failure for network traffic, unless HA is used
  • Single point of compromise if the UTM has vulnerabilities
  • Potential impact on latency and bandwidth when the UTM cannot keep up with the traffic


Some Popular UTM OEM







Wednesday, March 19, 2014

Tech News - A New Hacking Trend To Steal Your Google Account

A New Hacking Trend To Steal Your Google Account


Warning: If you receive an email with the subject "Documents," and it directs you to a webpage that looks like a Google Drive sign-in page, do not enter your information.

It's likely a new phishing scam, in which a thief creates a fake portal that asks for people's private information and then steals it. (Netflix recently faced a similar issue.)

This one uses a fake Google Drive landing page to get your Gmail address and password, cyber security company Symantec's official blog reported last Thursday. You're meant to think that the documents you'll be viewing are on Google Docs and that you need to sign in to see them. Remember, though, it's all a scam.

"We've removed the fake pages and our abuse team is working to prevent this kind of spoofing from happening again," a representative from Google tells The Huffington Post. "If you think you may have accidentally given out your account information, please reset your password."

If you were to put your Gmail address and password in the fake login, your credentials would be stolen, but you'd be taken to a real document on Google Docs, so you might not even know you'd been scammed, Symantec says.

With access to your Gmail account, scammers can make purchases on Google Play, use your Google+ account, access your Google Drive documents and more.

As always, the easiest way to protect yourself from phishing scams is to not click on unknown links and not open emails from unknown senders. Also, don't type your password anywhere that you're not 100 percent sure is real.
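
One practical habit is to check the hostname of a sign-in page before typing credentials, because a phishing page can copy Google's look but cannot serve from Google's real domains over a valid HTTPS connection. The snippet below is a simple illustration of that check; the allow-list of Google sign-in hosts is an example and deliberately not exhaustive.

```python
# Simple illustrative check: does a sign-in URL actually point at a
# Google-owned domain? The allow-list below is an example, not exhaustive.

from urllib.parse import urlparse

GOOGLE_SIGNIN_HOSTS = {"accounts.google.com", "drive.google.com", "docs.google.com"}

def looks_like_real_google_login(url):
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in GOOGLE_SIGNIN_HOSTS

print(looks_like_real_google_login("https://accounts.google.com/ServiceLogin"))    # True
print(looks_like_real_google_login("http://google-drive-docs.example.com/login"))  # False
```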




Tech News - WhatsApp and Android Security Flaws

WhatsApp and Android Security Flaws


WhatsApp, the mobile messaging company recently acquired by Facebook for $16 billion, said last Thursday that reports of a security flaw in its system were “overstated”.

Earlier this week, tech consultant and CTO at DoubleThink Bas Bosschert released a report warning that an exploit in the app’s Android encryption would enable another app to access WhatsApp chat transcripts and use them for any purpose. The key to the hack, according to Bosschert, is that WhatsApp uses a phone’s SD card to store messages, which “can be read by any Android application if the user allows it to access the SD card.”

However, WhatsApp denies that Bosschert’s methods are accurate. The company claims it’s not WhatsApp’s security problem — any user who downloads a malicious app that can access other information on the SD card is always at risk of losing information to hackers, WhatsApp’s data included.




Saturday, March 15, 2014

What is a Distributed Denial of Service (DDoS) Attack !!!


Distributed Denial of Service (DDoS) Attacks


A Distributed Denial-of-Service (DDoS) attack is one in which a multitude of compromised systems attack a single target, thereby causing denial of service for users of the targeted system. The flood of incoming messages to the target system essentially forces it to shut down, thereby denying service to legitimate users of the system.

In a typical DDoS attack, the assailant begins by exploiting a vulnerability in one computer system and making it the DDoS master. The attack master, also known as the botmaster, identifies and infects other vulnerable systems with malware. Eventually, the assailant instructs the controlled machines to launch an attack against a specified target. 

There are two types of DDoS attacks: a network-centric attack, which overloads a service by using up bandwidth, and an application-layer attack, which overloads a service or database with application calls. The inundation of packets to the target causes a denial of service. While the media tends to focus on the target of a DDoS attack as the victim, in reality there are many victims in a DDoS attack -- the final target as well as the systems controlled by the intruder. Although the owners of co-opted computers are typically unaware that their computers have been compromised, they are nevertheless likely to suffer degraded service and poor performance.

A computer under the control of an intruder is known as a zombie or bot. A group of co-opted computers is known as a botnet or a zombie army. Both Kaspersky Labs and Symantec have identified botnets -- not spam, viruses, or worms -- as the biggest threat to Internet security.

Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. DoS threats are also common in business and are sometimes responsible for website attacks. The technique has also seen extensive use in online gaming, where it is employed by server owners or disgruntled competitors against rival servers, such as popular Minecraft servers. Increasingly, DoS attacks have also been used as a form of resistance; Richard Stallman has stated that DoS is a form of 'Internet Street Protest'. The term is generally used in relation to computer networks, but it is not limited to this field; for example, it is also used in reference to CPU resource management.

One common method of attack involves saturating the target machine with external communications requests, so much so that it cannot respond to legitimate traffic, or responds so slowly as to be rendered essentially unavailable. Such attacks usually lead to a server overload. In general terms, DoS attacks are implemented either by forcing the targeted computer(s) to reset, by consuming their resources so that they can no longer provide the intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
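
A very rough way to spot this saturation pattern is to count requests per source over a short window and flag anything that exceeds a sensible threshold. The Python sketch below is a simplified illustration of that idea, with made-up thresholds and example IP addresses; real DDoS mitigation happens upstream with dedicated scrubbing services, rate limiting and excess capacity.

```python
# Simplified, illustrative request-rate check. Real DDoS mitigation is done
# upstream (scrubbing centers, rate limiting, anycast), not in a loop like this.

from collections import Counter

def flag_heavy_sources(requests, window_seconds=10, per_source_limit=100):
    """requests: iterable of (timestamp, source_ip) within one window."""
    counts = Counter(src for _, src in requests)
    total_rate = len(requests) / window_seconds
    heavy = {src: n for src, n in counts.items() if n > per_source_limit}
    return total_rate, heavy

sample = [(t, "203.0.113.7") for t in range(1500)] + [(t, "198.51.100.2") for t in range(20)]
rate, heavy = flag_heavy_sources(sample)
print(f"{rate:.0f} req/s overall; heavy sources: {heavy}")
# 152 req/s overall; heavy sources: {'203.0.113.7': 1500}
```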

Denial-of-service attacks are considered violations of the Internet Architecture Board's Internet proper use policy, and also violate the acceptable use policies of virtually all Internet service providers. They also commonly constitute violations of the laws of individual nations.

The United States Computer Emergency Readiness Team (US-CERT) defines symptoms of denial-of-service attacks to include:

  • Unusually slow network performance (opening files or accessing web sites)
  • Unavailability of a particular web site
  • Inability to access any web site
  • Dramatic increase in the number of spam emails received (this type of DoS attack is considered an e-mail bomb)
  • Disconnection of a wireless or wired internet connection
  • Long-term denial of access to the web or any internet services



In the Police and Justice Act 2006, the United Kingdom specifically outlawed denial-of-service attacks and set a maximum penalty of 10 years in prison.

In the US, denial-of-service attacks may be considered a federal crime under the Computer Fraud and Abuse Act with penalties that include years of imprisonment. Many other countries have similar laws.


Wednesday, March 12, 2014

Four Important Tasks to Complete Before Migrating from Exchange 2010 to Exchange 2013


Preparing for a migration to Exchange Server 2013 is not a small activity. There are a number of tasks you must complete before you can even begin installing Exchange 2013. The following are some of the most important tasks you'll have to complete before you can bring an Exchange Server 2013 Client Access Server into an Exchange 2010 environment.

Task 1: Install the correct Service Packs (SP)


Make sure existing Exchange servers are running the correct service pack level before migrating to Exchange 2013. All of your Exchange 2010 servers will need to be running Exchange 2010 SP3.

If your Exchange Server organization contains multiple Active Directory sites, you must apply the service pack to all the Exchange servers in the Internet-facing site first. Once that's done, you can begin applying the service pack to internal sites.


After your Exchange Server 2010 machines have been upgraded to the correct service pack version, you'll need to download Cumulative Update 2 for Exchange Server 2013. This update is necessary for Exchange Server 2010 and Exchange Server 2013 to coexist. You should be able to install the cumulative update without first installing Exchange Server 2013.

Task 2: Prepare Active Directory (AD)


Next you'll need to update Active Directory before migrating to Exchange 2013. To do so, you'll need administrative rights at the forest level and at the domain level. The account you use will also need Schema Admin permissions.

It's technically possible to skip the Active Directory preparation because Exchange Server setup will detect whether AD is ready and, assuming you have the correct permissions, will automatically prepare it. However, many organizations prefer to prepare the Active Directory ahead of time. Sometimes this is done to reduce the amount of time it takes to deploy the first Exchange 2013 server; it's more often because the Exchange admin lacks the appropriate permissions to modify the Active Directory schema. Microsoft provides instructions for updating AD.

Task 3: Set up a temporary Exchange Server


Microsoft made major architectural changes to the Client Access Server role in Exchange Server 2013; the CAS is now lightweight and offers extremely limited functionality. In fact, there are really only three things an Exchange Server 2013 Client Access Server can do: it can authenticate requests, redirect requests and proxy requests. The Client Access Server does not natively perform any data processing. This is a problem because the Mailbox Server role handles all data processing in Exchange Server 2013, including the execution of remote PowerShell cmdlets. Therefore, a lone Exchange 2013 Client Access Server is powerless to do anything on its own; it depends entirely on a back-end Mailbox Server to perform even basic functions.

This is why it's important to set up a temporary Exchange 2013 server on a VM. The first Exchange 2013 server you bring into an Exchange 2010 organization must contain both the Client Access Server and the Mailbox Server roles. This is obviously not a desirable configuration for organizations that want to separate these roles. So, you'll need to deploy a temporary Exchange 2013 server containing both server roles. Once the server is in place, you can bring other Exchange 2013 servers online that are running just the Client Access Server role or just the Mailbox Server role. When you're done, simply remove Exchange Server from your temporary VM.

Task 4: Certificates


The final step before installing your first Exchange 2013 server is to evaluate your certificate requirements and acquire any necessary certificates.

Depending on your namespace requirements and what types of certificates you currently use, it may be possible to reuse the certificates you already have in place. Often, however, new certificates are required. This is especially true for organizations using something other than Subject Alternative Name certificates or wildcard certificates.

If your organization still has Exchange Server 2007 servers, you'll most likely need new certificates due to legacy namespace requirements. 






Sunday, March 2, 2014

Virtual Private Network (VPN) - At a glance

Virtual Private Network (VPN)


A virtual private network (VPN) is a network that uses a public telecommunication infrastructure, such as the Internet, to provide remote offices or individual users with secure access to their organization's network. A virtual private network can be contrasted with an expensive system of owned or leased lines that can only be used by one organization. The goal of a VPN is to provide the organization with the same capabilities, but at a much lower cost.

A VPN works by using the shared public infrastructure while maintaining privacy through security procedures and tunneling protocols such as the Layer Two Tunneling Protocol (L2TP). In effect, the protocols, by encrypting data at the sending end and decrypting it at the receiving end, send the data through a "tunnel" that cannot be "entered" by data that is not properly encrypted. An additional level of security involves encrypting not only the data, but also the originating and receiving network addresses.
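
To make the "tunnel" idea concrete: the inner packet, including its private source and destination addresses, is encrypted and carried as the payload of an outer packet exchanged between the two tunnel endpoints. The Python sketch below models that encapsulate/decapsulate step with simple symmetric encryption (using the third-party cryptography package); it is a conceptual illustration, not an implementation of L2TP or IPsec, and the gateway addresses are made up.

```python
# Conceptual model of a VPN tunnel: the inner packet (with its private
# addresses) is encrypted and carried as the payload of an outer packet
# between the two tunnel endpoints. Requires the third-party
# "cryptography" package; this is an illustration, not L2TP/IPsec.

import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared by both tunnel endpoints
tunnel = Fernet(key)

def encapsulate(inner_src, inner_dst, data):
    inner = json.dumps({"src": inner_src, "dst": inner_dst, "data": data})
    return {
        "outer_src": "203.0.113.1",                  # public address of site A gateway
        "outer_dst": "198.51.100.1",                 # public address of site B gateway
        "payload": tunnel.encrypt(inner.encode()),   # private addresses hidden too
    }

def decapsulate(packet):
    return json.loads(tunnel.decrypt(packet["payload"]))

pkt = encapsulate("10.1.0.5", "10.2.0.9", "hello")
print(decapsulate(pkt))   # {'src': '10.1.0.5', 'dst': '10.2.0.9', 'data': 'hello'}
```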

A virtual private network (VPN) extends a private network across a public network, such as the Internet. It enables a computer to send and receive data across shared or public networks as if it were directly connected to the private network, while benefiting from the functionality, security and management policies of the private network. This is done by establishing a virtual point-to-point connection through the use of dedicated connections, encryption, or a combination of the two.

A virtual private network connection across the Internet is similar to a wide area network (WAN) link between the sites. From a user perspective, the extended network resources are accessed in the same way as resources available from the private network.

VPNs allow employees to securely access their company's intranet while traveling outside the office. Similarly, VPNs securely and cost-effectively connect geographically disparate offices of an organization, creating one cohesive virtual network. VPN technology is also used by ordinary Internet users to connect to proxy servers for the purpose of protecting one's identity.

Early data networks allowed VPN-style remote connectivity through dial-up modems or through leased line connections utilizing Frame Relay and Asynchronous Transfer Mode (ATM) virtual circuits, provisioned through a network owned and operated by telecommunication carriers. These networks are not considered true VPNs because they passively secure the data being transmitted by the creation of logical data streams. They have given way to VPNs based on IP and IP/Multiprotocol Label Switching Networks (MPLS), due to significant cost-reductions and increased bandwidth provided by new technologies such as Digital Subscriber Line (DSL) and fiber-optic networks.

VPNs can be either remote-access (connecting an individual computer to a network) or site-to-site (connecting two networks together). In a corporate setting, remote-access VPNs allow employees to access their company's intranet from home or while traveling outside the office, and site-to-site VPNs allow employees in geographically disparate offices to share one cohesive virtual network. A VPN can also be used to interconnect two similar networks over a dissimilar middle network; for example, two IPv6 networks over an IPv4 network.

VPN systems can be classified by:

  • The protocols used to tunnel the traffic.
  • The tunnel's termination point location, e.g., on the customer edge or network-provider edge.
  • Whether they offer site-to-site or remote-access connectivity.
  • The levels of security provided.
  • The OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity.

Authentication Process:

  • Tunnel endpoints must authenticate before secure VPN tunnels can be established.
  • User-created remote-access VPNs may use passwords, biometrics, two-factor authentication or other cryptographic methods.
  • Network-to-network tunnels often use passwords or digital certificates. They permanently store the key to allow the tunnel to establish automatically, without intervention from the user.

From the security standpoint, VPNs either trust the underlying delivery network, or must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs among physically secure sites only, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.