Thursday, 25 May 2017

iOS vs Android

Google's Android and Apple's iOS are operating systems used primarily in mobile technology, such as smartphones and tablets. Android, which is Linux-based and partly open source, is more PC-like than iOS, in that its interface and basic features are generally more customizable from top to bottom. However, iOS' uniform design elements are sometimes seen as being more user-friendly.

You should choose your smartphone and tablet systems carefully, as switching from iOS to Android or vice versa will require you to buy apps again in the Google Play or Apple App Store. Android is now the world’s most commonly used smartphone platform and is used by many different phone manufacturers. iOS is only used on Apple devices, such as the iPhone.

Comparison chart


Android versus iOS comparison chart

Developer
  Android: Google
  iOS: Apple Inc.
Initial release
  Android: September 23, 2008
  iOS: July 29, 2007
Source model
  Android: Open source
  iOS: Closed, with open-source components
Customizability
  Android: Extensive; almost anything can be changed
  iOS: Limited unless jailbroken
File transfer
  Android: Easier than iOS; uses the USB port and the Android File Transfer desktop app. Photos can be transferred via USB without extra apps.
  iOS: More difficult; media files are transferred using the iTunes desktop app. Photos can be transferred out via USB without extra apps.
Available on
  Android: Many phones and tablets. Major manufacturers include Samsung, Motorola, LG, HTC and Sony. The Kindle Fire also uses a modified version of Android. The Nexus line of devices runs pure Android; others bundle manufacturer software.
  iOS: iPod Touch, iPhone, iPad, Apple TV (2nd and 3rd generation)
Calls and messaging
  Android: Google Hangouts. Third-party apps such as Facebook Messenger, WhatsApp, Google Duo and Skype work on both Android and iOS.
  iOS: iMessage, FaceTime (with other Apple devices only). The same third-party apps work on both platforms.
App store
  Android: Google Play – 1,000,000+ apps. Other app stores such as Amazon and GetJar also distribute Android apps (as .APK files).
  iOS: Apple App Store – 1,000,000+ apps
Video chat
  Android: Google Hangouts and other third-party apps
  iOS: FaceTime (Apple devices only) and other third-party apps
OS family
  Android: Linux
  iOS: OS X, UNIX
Open source
  Android: Kernel, UI, and some standard apps
  iOS: The iOS kernel is not open source but is based on the open-source Darwin OS
Widgets
  Android: Yes
  iOS: No, except in Notification Center
Internet browsing
  Android: Google Chrome (or Android Browser on older versions; other browsers are available)
  iOS: Mobile Safari (other browsers are available)
Voice commands
  Android: Google Now (on newer versions)
  iOS: Siri
Maps
  Android: Google Maps
  iOS: Apple Maps (Google Maps also available via a separate app download)
Available languages
  Android: 32 languages
  iOS: 34 languages
Latest stable release
  Android: Android 6.0.1 (Marshmallow) (October 2015)
  iOS: 9.3 (March 21, 2016)
Alternative app stores and sideloading
  Android: Several alternative app stores exist besides the official Google Play Store. Downloading apps from other stores is risky because they may contain malware.
  iOS: Apple blocks third-party app stores; the phone must be jailbroken to install apps from other stores.
Battery life and management
  Android: Many Android phone manufacturers equip their devices with large batteries offering longer life.
  iOS: Apple batteries are generally not as big as the largest Android batteries, but Apple squeezes out decent battery life through hardware/software optimization.
Rooting, bootloaders, and jailbreaking
  Android: Full access and control over the device is available, and the bootloader can be unlocked.
  iOS: Complete control over the device is not available.
File manager
  Android: Yes, available
  iOS: Not available
Cloud services
  Android: Native integration with Google cloud storage: 15 GB free, 100 GB for $2/mo, 1 TB for $10/mo. Apps available for Amazon Photos, OneDrive and Dropbox.
  iOS: Native integration with iCloud: 5 GB free, 50 GB for $1/mo, 200 GB for $3/mo, 1 TB for $10/mo. Apps available for Google Drive, Google Photos, Amazon Photos, OneDrive and Dropbox.
Photo and video backup
  Android: Apps available for automatic backup of photos and videos. Google Photos allows unlimited backup of photos if you select the low-resolution option; OneDrive, Amazon Photos and Dropbox are alternatives.
  iOS: Up to 5 GB of photos and videos can be backed up automatically with iCloud. Google, Amazon, Dropbox, Flickr and Microsoft all offer auto-backup apps for both iOS and Android.
Security
  Android: Software patches reach Nexus device users soonest; manufacturers tend to lag in pushing out these updates, so at any given time the vast majority of Android devices are not running fully patched software.
  iOS: Most people will never encounter malware because they don't go outside the App Store for apps. Apple's software updates also support older iOS devices.

Features

Since Google decided to spin its main apps out of Android, the mobile OS itself is essentially just the app launcher and the Settings screen. In contrast, iOS updates still include updates to Mail, Maps, Safari, Notes, News and all the other apps you get with the software.
As we've noted, Google gives users and app developers more flexibility in terms of editing the way the OS works (default apps, lock screens, widgets and so on) - on iOS, you're pretty much stuck with the way Apple wants to do things (which for many users is just fine).

Visually, Android's Material Design offers a more colourful, well-defined visual interface than iOS, which hasn't had a major overhaul since 2013. Apple's OS is all translucent shades and thin lines, Google's is blocky card shapes and bold headings and fonts.

Both OSes handle multitasking in similar ways and iOS has also added a back button of its own in recent times. Both have battery saving features, mobile payments support, digital assistants, and the ability to back up all of your precious data to the cloud automatically.

Native and third-party apps

As we've already said, Google's apps (Gmail, Google Maps, Google Keep and so on) are now updated independently from Android. These apps are all available on iOS too, though the versions for Google's own OS are usually slightly superior (and often updated first).
Trying to compare all of these apps against Apple's equivalents is no easy job: it's likely you've already got used to one set of apps or the other. Hangouts vs iMessage, Gmail vs Mail, Google Maps vs Apple Maps... the features are similar and there are no clear winners.

iOS has long been the winner as far as third-party app support is concerned, though the gap has closed over the years: it's now rare to find a major app or game that doesn't eventually come to both Android and iOS, even though it might launch on one or the other first.
New, experimental apps usually appear on iOS before Android: due to the fragmentation issue mentioned above, it's easier for developers to code for iOS users (and they spend more money too). Apple's platform still has the edge as far as up and coming apps go.

Google Now vs Siri


Going forward, the biggest innovations in smartphone development are likely to come from the super-intelligent digital assistants: Google Now and Siri. Both give you voice-controlled access to your phone as well as smart prompts for travel and events when you need them.

Traditionally, Google Now has been more about surfacing the right info when you need it, though Apple has recently started to make Siri more proactive too. Google Now is also available on iOS in limited form, but Siri is restricted to iOS and the new Apple TV.

iOS 11 vs Android O

Google took to the stage during Google I/O 2017 to announce more features of the company’s upcoming Android O, due for release later this year. There were a range of new features announced that look to improve the user experience, so what does Apple need to offer with iOS 11 to compete?
We’ve outlined some of the key new additions to Android O, along with features that Apple should implement into iOS 11 to compete.

What’s new in Android O?

So, what does Android O bring to the table? While none are ground-breaking features, Android O brings a handful of useful changes to Google’s operating system.

Picture-in-Picture

One of the announcements made at Google I/O 2017 was the addition of picture-in-picture, a feature already used not only in Google’s YouTube app, but also in iOS. Picture-in-Picture was introduced for iPad users along with iOS 9, allowing users to minimise the video and perform other tasks while still being able to watch.
Android O's implementation of picture-in-picture works much the same as it does on iOS; with a video playing, Android users need only tap the Home button and the video will pop into a small window that remains on-screen while you use other apps. You can slide the video around for the best placement or swipe it off-screen to end the video.

Notification Dots

While users of custom Android launchers have had access to notification badges for years, Google has never officially offered it – until now. Much like with app badges on iOS, you’ll see a small ‘dot’ that appears on top of your app icon when you receive a notification on Android O.
What is different from iOS is what you can do with it; as well as tapping the app to open it and interact with the notification, you can long-press the icon to get a short list of actions to perform. This includes the ability to view the notification without opening the app, via a small on-screen pop-up that looks like the 3D Touch shortcut menu on iOS but is focused on notifications.

Smart Text Selection

Basic text selection tools have been around for quite some time and have largely remained unchanged. More recent iterations of Android and iOS have introduced formatting to applicable text, but not much else. That was until Google announced Smart Text Selection, which uses Google’s AI to intelligently analyse what is being selected, and provides you with contextual shortcuts.
Say, for example, you highlight a phone number – it’ll offer you a shortcut to dial the number. Similarly, highlighting an address will provide a shortcut to begin navigation using Google Maps. It’s not just smart suggestions either, as the highlighting process will also be more intelligent, selecting full phrases and addresses instead of single words.
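The contextual-shortcut idea can be sketched with ordinary pattern matching. The real feature relies on Google's on-device AI rather than regexes, so the patterns and action names below are purely illustrative assumptions:

```python
import re

# Illustrative patterns only -- the real Smart Text Selection uses a
# trained on-device model, not hand-written regexes.
PATTERNS = [
    (re.compile(r"^\+?[\d\s\-()]{7,}$"), "Dial number"),
    (re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"), "Send email"),
    (re.compile(r"\d+\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.I),
     "Navigate with Google Maps"),
]

def suggest_action(selected_text):
    """Return a contextual shortcut for the highlighted text, if any."""
    text = selected_text.strip()
    for pattern, action in PATTERNS:
        if pattern.search(text):
            return action
    return None

print(suggest_action("+44 20 7946 0958"))   # a phone number yields a dial shortcut
print(suggest_action("10 Downing Street"))  # an address yields a navigation shortcut
```

Highlighting text that matches none of the patterns simply returns no suggestion, mirroring how the shortcut only appears for recognised selections.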

Auto-Fill

Android O isn't about one headline feature; instead, it's about addressing a number of smaller issues within the operating system. One such addition is Auto-Fill. It's not going to bring iOS users over to the dark side, but it should make Android users' lives a little bit easier.
For the most used apps on your Android device, Android O will help you quickly log in. Although support needs to be manually added by developers, once supported, Android O will remember usernames and passwords to quickly log into apps on your device.
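As a rough sketch of that flow (the real mechanism is Android's autofill framework, which apps opt into; the class and app names here are hypothetical), an opt-in credential store might look like:

```python
# Toy model of platform autofill: apps that declare support get saved
# credentials filled in automatically. All names are illustrative.
class AutofillStore:
    def __init__(self):
        self._saved = {}          # app_id -> (username, password)
        self._supported = set()   # apps whose developers added autofill support

    def register_app(self, app_id):
        """Developer opts the app in to autofill."""
        self._supported.add(app_id)

    def save(self, app_id, username, password):
        self._saved[app_id] = (username, password)

    def fill(self, app_id):
        """Return saved credentials only if the app supports autofill."""
        if app_id in self._supported:
            return self._saved.get(app_id)
        return None

store = AutofillStore()
store.register_app("com.example.chat")
store.save("com.example.chat", "alice", "s3cret")
print(store.fill("com.example.chat"))
```

The key point the sketch captures is that saved credentials exist regardless, but they are only offered back to apps whose developers added support.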

Vitals

Vitals isn’t one feature, but is instead a suite of tools that helps to improve the performance and security of your Android device. It includes Google Play Protect, which acts similarly to a virus scanner for Android apps and scans your apps for malicious content. It also includes Wise Limits, which prevents apps from running in the background for too long and helps to make your battery last longer.

Google claims that the improvements have halved the boot time of the company’s Pixel smartphone, and that apps run faster too.


Tuesday, 23 May 2017

AWS vs OpenStack

Let us compare the popularity of two top cloud-computing platforms:
1) the well-known Amazon Web Services, which companies typically leverage for the speed and convenience of Amazon's global, hosted, cloud-computing infrastructure, and
2) the increasingly versatile OpenStack, which allows organizations to roll their own cloud-computing services on standard hardware.

OpenStack has similarly grown in popularity since its launch in 2010, with a notable jump in the spring of 2013. Some of the more notable companies contributing to OpenStack include: AT&T, AMD, Canonical, Cisco, Citrix, Comcast, Cray, Dell, Dreamhost, EMC, Ericsson, Fujitsu, GoDaddy, Google, HP, Hitachi, Huawei, IBM, Intel, Juniper Networks, Mirantis, Oracle, Red Hat, SUSE Linux, VMware, and Yahoo!.
While OpenStack has a large and diverse contributor base, AWS is the fifth-largest web hosting provider globally.
Worldwide Market Share by Number of Clients in 2015:
  1. GoDaddy - 4.26%
  2. BlueHost - 2.56%
  3. Host Gator - 2.15%
  4. OVH.com - 1.91%
  5. Amazon Web Services - 1.81%
  6. Rackspace - 1.59%
  7. 1&1 - 1.54%
  8. Hetzner - 1.29%
  9. SoftLayer - 1.19%
  10. DreamHost - 1.01%

source: http://hostadvice.com/marketshare/ (2015)

As an open-source cloud-computing platform, OpenStack obviously can't compete on these terms with a multi-billion-dollar cloud-computing and software-as-a-service company. There are a number of selling points to consider, however:
  • WalMart uses OpenStack to coordinate 100,000+ cores, which provided 100% uptime during Black Friday last year
  • Developers gave over 300 talks at the OpenStack Summit in Tokyo this October
  • Debian, Canonical, Red Hat, and SUSE Linux all support OpenStack and are active contributors
  • OpenStack has enabled companies like Disney, Bloomberg, and Wells Fargo to manage their own clouds at a fraction of the cost of proprietary solutions like AWS
  • OpenStack is the only solution that supports mixed hypervisor and bare metal server environments

I think these points lend themselves to the conclusion that adoption and further development in OpenStack are likely to keep pace.

Here is a subset of AWS services:

In the Compute realm we have...
  • Amazon Elastic Compute Cloud (EC2) scalable virtual private servers using Xen
  • Amazon Elastic MapReduce (EMR) Hadoop-based big data analytics
In Networking we have...
  • Amazon Route 53 scalable DNS
  • Amazon Virtual Private Cloud (VPC) isolated EC2 instances with the ability to extend corporate networks via VPN
  • Amazon Elastic Load Balancing (ELB)
In Content Delivery we have...
  • Amazon CloudFront CDN
In Storage we have...
  • Amazon Simple Storage Service (S3)
  • Amazon Glacier low-cost, long-term storage for data archival
  • Amazon Elastic File System (EFS) to accompany EC2
In the Database realm we have...
  • Amazon DynamoDB low-latency NoSQL SSD-backed databases
  • Amazon Relational Database Service (RDS) with MySQL, Oracle, SQL Server, and PostgreSQL support
  • Amazon SimpleDB distributed database with EC2 and S3 interoperability, written in Erlang
In the Deployment realm we have...
  • AWS Elastic Beanstalk for quick deployment and cloud app management
  • AWS OpsWorks EC2 configuration services via Chef, which we discussed previously
In Management we have...
  • Amazon Identity and Access Management (IAM) to authenticate into the various services
  • AWS Directory Service for tying into an on-premises Microsoft Active Directory or for setting up a new stand-alone AWS directory
  • Amazon CloudWatch for application and resource monitoring
  • Amazon CloudHSM Hardware Security Module for data security and for meeting regulatory compliance requirements
  • AWS Key Management Service (KMS) for creating and managing encryption keys
In the Application Services realm we have...
  • Amazon DevPay (beta) for billing and account management
  • Amazon Elastic Transcoder (ETS) for mobile video transcoding from S3
  • Amazon Simple Email Service (SES) for sending bulk and transactional email
  • Amazon Simple Notification Service (SNS) multi-protocol application "push" notifications
  • Amazon Cognito secure application-user data management and synchronization tool

Here are the main components of the modular OpenStack architecture:

Compute (Nova)
  • An Infrastructure as a Service (IaaS) system
  • Management and automation of pools of computer resources
  • Bare metal and high-performance computing (HPC) configurations
  • KVM, VMware, and Xen hypervisor virtualization
  • Hyper-V and LXC containerization
  • Python-based with various external libraries: Eventlet for concurrent programming, Kombu for AMQP communication, SQLAlchemy for database access, etc.
  • Designed to scale horizontally on standard hardware with no proprietary hardware or software requirements
  • Interoperable with legacy systems
Image Service (Glance)
  • OpenStack Image Service for discovery, registration, and delivery of disk and server images
  • Template-building from stored images
  • Storage and cataloging of unlimited backups
  • REST interface for querying disk image information
  • Streaming of images to servers
  • VMware integration, with vMotion Dynamic Resource Scheduling (DRS) and live migration of running virtual machines
  • All OpenStack OS images built on virtual machines
  • Maintenance of image metadata
  • Creation, deletion, sharing, and duplication of images
Object Storage (Swift)
  • Scalable redundant storage system
  • Automatic replication of content from failed disks to other active nodes
  • Suitable for inexpensive commodity hard drives and servers
Dashboard (Horizon)
  • GUI for access, provision, and automation of cloud-based resources for administrators and users
  • Third-party billing, monitoring, management tool integration
  • Customizable (brandable) dashboard
  • EC2 compatibility
Identity Service (Keystone)
  • Unified authentication system across the cloud OS
  • Integration with existing backend directory services such as LDAP
  • Various authentication methods: username/password, token-based systems, and AWS-style logins
  • Queryable, single registry of all deployed services, with programmatic determination of access for users and third-party tools
Networking (Neutron)
  • Manual and automatic management of networks and IP addresses
  • Distinct networking models for different applications and user groups
  • Flat networks or VLANs for separating servers and traffic
  • Static IP addresses and DHCP
  • Floating IP addresses for dynamic rerouting to resources on the network
  • Software-defined networking (SDN) via OpenFlow for multi-tenancy and scalability
  • Management of intrusion detection systems (IDS), load balancing, firewalls, VPNs, etc.
Block Storage (Cinder)
  • Persistent block-level storage for databases and expandable file systems
  • Block storage integration into OpenStack Compute and Dashboard for allocation of storage
  • Various storage platforms supported: Ceph, CloudByte, Coraid, EMC (ScaleIO, VMAX and VNX), GlusterFS, Hitachi Data Systems, IBM Storage (Storwize family, SAN Volume Controller, XIV Storage System, and GPFS), Linux LIO, NetApp, Nexenta, Scality, SolidFire, HP (StoreVirtual and 3PAR StoreServ families) and Pure Storage
  • Snapshot management for backing up data stored on block storage volumes
  • Restoring of snapshots, use of snapshots as templates for new block storage volumes
Orchestration (Heat)
  • Orchestration of multiple composite cloud applications using templates
  • OpenStack-native REST API
  • CloudFormation-compatible Query API
Telemetry (Ceilometer)
  • A single point of contact for billing systems
  • Traceable, auditable delivery of counters for billing
  • Counters extensible to new projects
  • Independent data collection
Database (Trove)
  • Database-as-a-Service (DBaaS) provisioning of relational database engines
  • DBaaS provisioning of non-relational database engines
Elastic Map Reduce (Sahara)
  • Hadoop cluster provisioning
  • Setting of parameters based on: Hadoop version, cluster topology, node hardware details, etc.
  • Cluster deployment in minutes
  • Scaling of already-provisioned clusters by adding and removing worker nodes on demand
Bare Metal Provisioning (Ironic)
  • Provisioning of bare metal machines (as opposed to virtual machines)
  • Bare-metal hypervisor API
  • Plugins for interacting with bare-metal hypervisors
  • PXE and IPMI simultaneous provisioning, turning machines on and off as needed
  • Extensible with vendor-specific plugins for additional functionality
Multiple Tenant Cloud Messaging (Zaqar)
  • Multi-tenant cloud messaging service for Web developers
  • Some components inspired by Amazon's SQS, with additional semantics for event broadcasting
  • Fully RESTful API for sending messages between the various components of SaaS and mobile applications
  • Surfacing of events to end users and guest agents that run in the "over-cloud" layer
Shared File System Service (Manila)
  • Vendor-agnostic share management API
  • Create, delete, give/deny access to a share
  • Support for commercial storage appliances from: EMC, NetApp, HP, IBM, Oracle, Quobyte, and Hitachi Data Systems
  • Support for Red Hat's GlusterFS filesystem
DNSaaS (Designate)
  • DNS as a Service
Security API (Barbican)
  • REST API for secure storage, provisioning and management of secrets
  • Built for use in all environments, including large ephemeral clouds
AWS Compatibility

  • Interoperability with Amazon EC2 and Amazon S3
  • Minimal effort to port AWS client applications to OpenStack

Multi-tenancy: AWS vs OpenStack

The first conceptual difference between AWS and OpenStack is about multi-tenancy. OpenStack offers a multi-layer tenant mechanism with domain and projects. A domain is a collection of users, groups, and projects, in a way parallel to AWS’s account. LDAP groups are attached to domains. OpenStack’s Project is a container of virtual resources such as virtual machines, networks and volumes. Using projects, users can establish several isolated and independently controlled groups of resources that serve different objectives. In the Kilo release, Keystone introduced the hierarchical multi-tenancy concept, using sub-projects.
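The containment hierarchy described above (domains holding users, groups and projects; projects holding virtual resources; Kilo-style sub-projects) can be sketched as follows, with all names made up:

```python
# Minimal sketch of OpenStack's tenant hierarchy: a domain holds users,
# groups, and projects; a project holds virtual resources; Kilo's
# hierarchical multi-tenancy allows sub-projects. Names are illustrative.
class Project:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # set for Kilo-style sub-projects
        self.resources = []       # VMs, networks, volumes, ...

class Domain:
    """Roughly parallel to an AWS account."""
    def __init__(self, name):
        self.name = name
        self.users, self.groups, self.projects = [], [], []

corp = Domain("acme")
web = Project("web")
staging = Project("web-staging", parent=web)   # isolated sub-project
corp.projects += [web, staging]
web.resources.append("vm-frontend-1")
```

Each project in the sketch is an independently controlled container of resources, which is exactly the isolation property the paragraph describes.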
Accommodating more than one million customers, Amazon Web Services is a multi-tenant cloud by nature; however, at the account level, a single user receives a single-tenant experience. Having said that, AWS offers Virtual Private Cloud (VPC), which is somewhat parallel to OpenStack's project. Amazon's VPC lets the user provision a logically isolated section of the AWS cloud where the user can launch AWS resources in a virtual network that s/he defines. AWS's VPC is limited to one router and one IP block; OpenStack projects commonly follow the same pattern, though it is not compulsory for them. It is worth noting that all of EC2's virtual networking capabilities are only available using VPC.
On the other hand, unlike OpenStack, Amazon's VPC offers extremely valuable tools that simplify the establishment of secured connectivity between VPCs and between a VPC and on-premise resources. The classical use case for enterprises is running the web servers, or the entire customer-facing application, on Amazon's public cloud, while keeping the rest of the servers on-premise. Through its API, Amazon allows the user to establish a VPN connection and even control the customer's and AWS's gateways. This is extremely valuable for enterprises that have chosen the hybrid cloud path, especially given that Amazon has integrated its VPN gateway with the market-leading VPN CPEs (customer premise equipment). OpenStack's Neutron project does offer VPN-as-a-Service (VPNaaS) capabilities; however, it is experimental and lacks the end-to-end integration that Amazon provides.

Networking: Neutron vs AWS VPC

From the network perspective, while OpenStack provides control over the L2 elements of the virtual network, AWS exposes only subnets. OpenStack Neutron's API allows granular control of elements such as ports (the connection point for attaching a virtual server to a virtual network) and the ability to allocate VLAN IDs that correspond to VLANs present in the physical network. This is especially useful for provider networks, which are mapped to existing physical networks in the data center. These differences are again attributable to the different concepts behind AWS and OpenStack. In a private cloud, users manage the physical networking themselves, so it is crucial for virtual networking to be fully integrated with the physical data center networking. The public cloud, however, is a managed service that takes the hassle of physical network management away from the user, so exposing L2 control would be irrelevant.
As for Layer 3 networking, conceptually, Amazon's AWS and OpenStack's Neutron provide comparable capabilities. Both cloud services allow creation of network subnets. OpenStack allows use of several subnets on the same virtual network, although it is not a common practice. AWS allows users to define Elastic IP addresses, which are public IP addresses reachable from the Internet. OpenStack offers a similar mechanism, the floating IP, which is part of the virtual router's API.
Both clouds provide routing services; in AWS, each VPC includes an implicit virtual router, and the API allows the user to set the routing table (a set of rules, called routes, that determine where network traffic is directed). OpenStack's Neutron API also allows management of the routing table; however, it additionally allows management of the router entities themselves and does not limit the number of routers per project. Moreover, a single router can be connected to more than one project.
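The routing-table behaviour both clouds share, directing traffic to the most specific matching destination prefix, can be sketched with the standard library's ipaddress module (the routes and target names below are illustrative):

```python
import ipaddress

# Illustrative route table: destination prefix -> target, matched by
# longest prefix, the way a VPC route table or a Neutron router
# directs traffic. Prefixes and target names are made up.
routes = {
    "10.0.0.0/16": "local",            # traffic within the VPC/project network
    "10.0.5.0/24": "peering-conn",     # more specific route wins
    "0.0.0.0/0": "internet-gateway",   # default route
}

def lookup(dest_ip):
    """Return the target of the most specific route covering dest_ip."""
    ip = ipaddress.ip_address(dest_ip)
    best = max(
        (ipaddress.ip_network(p) for p in routes if ip in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

print(lookup("10.0.5.7"))    # covered by /24 and /16: the /24 wins
print(lookup("10.0.9.1"))    # only the /16 and default match
print(lookup("8.8.8.8"))     # falls through to the default route
```

The longest-prefix rule is what lets a more specific route (here a hypothetical peering connection) override the broader local route without removing it.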
From a network security perspective, AWS and OpenStack offer similar mechanisms. Security groups are used to inspect the traffic at the instance level. Networking ACLs and virtual firewalls are used by AWS and OpenStack respectively to inspect traffic going between subnets. There are minor nuances unique to each API; however, the general concept is very similar.

Here we compare some of the main components of AWS and OpenStack.

Compute

Why do you need it?

To run an application you need a server with CPU, memory and storage, with or without pre-installed operating systems and applications.

Definition
  Compute provides virtual machines/servers.
  OpenStack: Instance
  AWS: Instance/VM
Sizes
  How much memory, CPU and temporary (ephemeral) storage is assigned to the instance/VM.
  OpenStack: Flavors, in a variety of sizes: micro, small, medium, large, etc.
  AWS: A variety of sizes: micro, small, medium, large, etc.
Operating systems offered
  What operating systems the cloud offers to end users.
  OpenStack: Whatever operating systems the cloud administrators host on the OpenStack cloud (Red Hat certifies Microsoft Windows, RHEL and SUSE).
  AWS: AMIs provided by the AWS Marketplace.
Templates/images
  A base configuration of a virtual machine, from which other virtual machines can be created; catalogs of virtual machine images can be created from which users select a virtual machine.
  OpenStack: Glance. Administrators upload images and create catalogs for users; users can also upload their own images.
  AWS: Amazon Machine Images (AMIs). AWS provides an online marketplace of pre-defined images; users can also upload their own images.

Networking

Why do you need it?

To network virtual servers to each other. You also need to control who can access a server, and you want to protect/firewall it, especially if it is exposed to the Internet.

Definition
  Networking provides connectivity for users to virtual machines, connecting virtual machines to one another and to external networks (the Internet).
  OpenStack: Neutron
  AWS: Networking
Private IP address (internal only, non-routable to the Internet)
  OpenStack: Every virtual instance is automatically assigned a private IP address, typically using DHCP.
  AWS: AWS allocates a private IP address for the instance using DHCP.
Public IP address
  OpenStack: A floating IP is a public IP address that you can dynamically attach to a running virtual instance.
  AWS: A public IP address is mapped to the instance's primary private IP address.
Networking service
  OpenStack: You can create networks and networking functions, e.g. L3 forwarding, NAT, edge firewalls, and IPsec VPN.
  AWS: Virtual routers or switches can be added if you use AWS VPC (Virtual Private Cloud).
Load balancing of VM traffic
  OpenStack: LBaaS (Load Balancing as a Service) balances traffic from one network to application services.
  AWS: ELB (Elastic Load Balancing) automatically distributes incoming application traffic across Amazon EC2 instances.
DNS (managing the DNS entries for your virtual servers and web applications)
  OpenStack: The DNS project (Designate) is in "incubation" and is not part of core OpenStack (as of the April 2015 Kilo release).
  AWS: Route 53, AWS's DNS service.
SR-IOV (a method of device virtualization that provides higher I/O performance and lower CPU utilization than traditional implementations)
  OpenStack: Each SR-IOV port is associated with a virtual function (VF). SR-IOV ports may be provided by hardware-based virtual Ethernet bridging, or may be extended to an upstream physical switch (IEEE 802.1br).
  AWS: AWS supports enhanced networking using SR-IOV, which provides higher packet-per-second (PPS) performance, lower inter-instance latencies, and very low network jitter.
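The public/floating IP mechanism in both clouds boils down to a NAT mapping from a public address onto an instance's private address, which can be re-pointed without touching the instance. A minimal sketch, with made-up addresses:

```python
# Sketch of floating/public IP association: a public address is a NAT
# mapping onto an instance's private address, and can be re-pointed
# without touching the instance itself. All addresses are made up.
class FloatingIPPool:
    def __init__(self, public_ips):
        self.free = list(public_ips)
        self.nat = {}                      # public IP -> private IP

    def associate(self, private_ip):
        """Attach the next free public address to an instance."""
        public = self.free.pop(0)
        self.nat[public] = private_ip
        return public

    def reassociate(self, public_ip, new_private_ip):
        """Re-point a floating IP at a different instance (e.g. failover)."""
        self.nat[public_ip] = new_private_ip

pool = FloatingIPPool(["203.0.113.10", "203.0.113.11"])
public = pool.associate("10.0.0.5")       # instance becomes reachable
pool.reassociate(public, "10.0.0.6")      # fail over to a standby instance
```

Because the mapping lives outside the instance, the same public address can survive instance replacement, which is the practical value of floating/Elastic IPs.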

Monitoring

Why do you need it?

You get insight into usage patterns and utilization of the physical and virtual resources. You may want to account for individual usage and optionally bill users for it.

Definition
  Monitoring provides system-wide metering and usage data for the cloud, with the option to bill users for their usage.
  OpenStack: Ceilometer collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists the data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
  AWS: CloudWatch is a monitoring service for AWS cloud resources and the applications running on AWS. It collects and tracks metrics, collects and monitors log files, and sets alarms.
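The core loop shared by Ceilometer and CloudWatch, collecting measurements, persisting them, and triggering an action when a defined criterion is met, can be sketched in a few lines (the metric and threshold are illustrative):

```python
# Sketch of metric collection with a threshold alarm, the core idea
# behind both Ceilometer and CloudWatch. Values here are illustrative.
class MetricMonitor:
    def __init__(self, threshold, action):
        self.samples = []        # persisted measurements
        self.threshold = threshold
        self.action = action     # triggered when the criterion is met

    def record(self, value):
        self.samples.append(value)
        if value > self.threshold:
            self.action(value)

alerts = []
monitor = MetricMonitor(threshold=80.0,
                        action=lambda v: alerts.append(f"CPU at {v}%"))
for cpu in (42.0, 61.5, 93.2):
    monitor.record(cpu)
print(alerts)    # the alarm fired only for the sample above the threshold
```

In a real deployment the action would page an operator, scale a group, or feed a billing pipeline rather than append to a list.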

Security

Why do you need it?

You need the option of public-key cryptography for SSH login and password decryption. You also want to firewall virtual machines so that only certain traffic is allowed in (ingress) or out (egress).

Definition
  Security controls access to your virtual machines.
  OpenStack: Key pairs, security groups
  AWS: Key pairs, security groups
Key pairs
  To log in to your VM or instance, you must create a key pair (Linux: used for SSH; Windows: used to decrypt the Administrator password).
  OpenStack: When you launch a virtual machine, you can inject a key pair, which provides SSH access to your instance.
  AWS: Specify the name of the key pair when you launch the instance, and provide the private key when you connect to it.
Security groups
  Assign and control access to VM instances. A security group is a named collection of network access rules that limit the traffic that can access an instance; when you launch an instance, you can assign one or more security groups to it.
  OpenStack: Supported
  AWS: Supported
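A security group's behaviour, a named collection of allow rules checked against incoming traffic, can be sketched with the standard library's ipaddress module (the rules and addresses below are made up):

```python
import ipaddress

# Sketch of security-group evaluation: a named set of allow rules
# checked against incoming traffic. Rules and addresses are made up.
ssh_and_web = [
    # (protocol, port, allowed source CIDR)
    ("tcp", 22, "192.168.1.0/24"),   # SSH from the admin network only
    ("tcp", 443, "0.0.0.0/0"),       # HTTPS from anywhere
]

def allowed(rules, protocol, port, source_ip):
    """Ingress is permitted only if some rule matches the traffic."""
    src = ipaddress.ip_address(source_ip)
    return any(
        protocol == r_proto and port == r_port
        and src in ipaddress.ip_network(r_cidr)
        for r_proto, r_port, r_cidr in rules
    )

print(allowed(ssh_and_web, "tcp", 22, "192.168.1.7"))   # admin SSH: allowed
print(allowed(ssh_and_web, "tcp", 22, "203.0.113.9"))   # outside SSH: blocked
print(allowed(ssh_and_web, "tcp", 443, "203.0.113.9"))  # public HTTPS: allowed
```

The default-deny shape, where anything not explicitly matched is dropped, is how security groups behave in both clouds.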

Monday, 22 May 2017

Software-Defined Networking (SDN)

What is SDN?

SDN is the physical separation of the network control plane from the forwarding plane, in which a control plane controls several devices.

Software-Defined Networking (SDN) is an emerging architecture that is dynamic, manageable, cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of today's applications. This architecture decouples the network control and forwarding functions enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. The OpenFlow® protocol is a foundational element for building SDN solutions. The SDN architecture is:
  • Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
  • Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
  • Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
  • Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
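To make the decoupling above concrete, here is a toy Python sketch of a logically centralized controller programming simple forwarding devices. All the names here (Controller, Switch, install_rule) are made up for illustration; this is not any real controller's API, just the shape of the idea.

```python
# Sketch: the control plane holds the intelligence; forwarding devices
# only apply whatever rules the controller pushes down to them.

class Switch:
    """A forwarding device with no local intelligence: it matches
    packets against rules installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # list of (match_fn, action) pairs

    def handle(self, packet):
        for match, action in self.flow_table:
            if match(packet):
                return action
        return "send-to-controller"  # no rule: ask the control plane


class Controller:
    """The control plane: keeps a global view and programs every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def install_rule(self, match, action):
        # One network-wide policy decision programs every device at once,
        # which is why the network appears as a single logical switch.
        for sw in self.switches:
            sw.flow_table.append((match, action))


ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)

# One policy decision, applied everywhere: drop telnet traffic.
ctrl.install_rule(lambda p: p.get("dst_port") == 23, "drop")

print(s1.handle({"dst_port": 23}))   # drop
print(s2.handle({"dst_port": 443}))  # send-to-controller
```

Note how the administrator never touches a switch individually: the global view lives in the controller, and the switches stay dumb and cheap.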

Computing Trends are Driving Network Change

SDN addresses the fact that the static architecture of conventional networks is ill-suited to the dynamic computing and storage needs of today’s data centers, campuses, and carrier environments. The key computing trends driving the need for a new network paradigm include:
  • Changing traffic patterns: Applications that commonly access geographically distributed databases and servers through public and private clouds require extremely flexible traffic management and access to bandwidth on demand.
  • The “consumerization of IT”: The Bring Your Own Device (BYOD) trend requires networks that are both flexible and secure.
  • The rise of cloud services: Users expect on-demand access to applications, infrastructure, and other IT resources.
  • “Big data” means more bandwidth: Handling today’s mega datasets requires massive parallel processing that is fueling a constant demand for additional capacity and any-to-any connectivity.
In trying to meet the networking requirements posed by evolving computing trends, network designers find themselves constrained by the limitations of current networks:
  • Complexity that leads to stasis: Adding or moving devices and implementing network-wide policies are complex, time-consuming, and primarily manual endeavors that risk service disruption, discouraging network changes.
  • Inability to scale: The time-honored approach of link oversubscription to provision scalability is not effective with the dynamic traffic patterns in virtualized networks—a problem that is even more pronounced in service provider networks with large-scale parallel processing algorithms and associated datasets across an entire computing pool.
  • Vendor dependence: Lengthy vendor equipment product cycles and a lack of standard, open interfaces limit the ability of network operators to tailor the network to their individual environments.

Software-Defined Networking is Not OpenFlow

Often people point to OpenFlow as being synonymous with software-defined networking, but it is only a single element in the overall SDN architecture. OpenFlow is an open standard for a communications protocol that enables the control plane to interact with the forwarding plane. It must be noted that OpenFlow is not the only protocol available or in development for SDN.

The Benefits of Software Defined Networking

Offering a centralized, programmable network that can be dynamically provisioned to address the changing needs of businesses, software-defined networking also provides the following benefits:
  • Directly Programmable: The network is directly programmable because the control functions are decoupled from the forwarding functions, which enables the network to be programmatically configured by proprietary or open-source automation tools, including OpenStack, Puppet, and Chef.
  • Centralized Management:  Network intelligence is logically centralized in SDN controller software that maintains a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Reduce CapEx: Software Defined Networking potentially limits the need to purchase purpose-built, ASIC-based networking hardware, and instead supports pay-as-you-grow models.
  • Reduce OpEx: SDN enables algorithmic control of network elements (such as hardware or software switches and routers) that are increasingly programmable, making it easier to design, deploy, manage, and scale networks. The ability to automate provisioning and orchestration optimizes service availability and reliability by reducing overall management time and the chance for human error.
  • Deliver Agility and Flexibility: Software Defined Networking helps organizations rapidly deploy new applications, services, and infrastructure to quickly meet changing business goals and objectives.
  • Enable Innovation: SDN enables organizations to create new types of applications, services, and business models that can offer new revenue streams and more value from the network.

Why Software Defined Networking Now?

Social media, mobile devices, and cloud computing are pushing traditional networks to their limits. Compute and storage have benefited from incredible innovations in virtualization and automation, but those benefits are constrained by limitations in the network. Administrators may spin up new compute and storage instances in minutes, only to be held up for weeks by rigid and oftentimes manual network operations.
Software-defined networking has the potential to revolutionize legacy data centers by providing a flexible way to control the network so it can function more like the virtualized versions of compute and storage today.

Software Defined Networking Use Cases

As detailed above, Software Defined Networking offers several benefits for businesses trying to move into a virtual environment. There are a multitude of use cases that SDN offers for different organizations, including carrier and service providers, cloud and data centers, as well as enterprise campuses.
For carriers and service providers, Software-Defined Networking offers bandwidth on demand, which lets controllers request additional bandwidth on carrier links when necessary, as well as WAN optimization and bandwidth calendaring. For cloud and data centers, network virtualization for multiple tenants is an important use case, as it offers better utilization of resources and faster turnaround times for creating a segregated network. Enterprise campuses gain network access control and network monitoring when using Software-Defined Networking policies.

With SDN, the administrator can change any network switch's rules when necessary -- prioritizing, de-prioritizing or even blocking specific types of packets with a very granular level of control. This is especially helpful in a cloud computing multi-tenant architecture, because it allows the administrator to manage traffic loads in a flexible and more efficient manner. Essentially, this allows the administrator to use less expensive commodity switches and have more control over network traffic flow than ever before.
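The granular, per-packet control described above can be sketched as an ordered rule list that prioritizes, de-prioritizes, or blocks specific traffic types. The rule set and protocol names below are invented for illustration, not any real switch's API; the point is that the administrator can swap the whole policy in software at any time.

```python
# Illustrative packet-classification policy: first matching rule wins.
RULES = [
    (lambda p: p["proto"] == "voip", "high-priority"),
    (lambda p: p["proto"] == "backup", "low-priority"),
    (lambda p: p["proto"] == "p2p", "block"),
]

def classify(packet, rules=RULES):
    """Return the action for a packet under the current rule list."""
    for predicate, action in rules:
        if predicate(packet):
            return action
    return "normal"

print(classify({"proto": "voip"}))  # high-priority
print(classify({"proto": "p2p"}))   # block
```

In a multi-tenant cloud, an operator could hold a different rule list per tenant and change it on the fly, without ever touching a physical switch.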

Friday, 19 May 2017

Intro to NFV

Network function virtualization (NFV) offers a new way to design, deploy and manage networking services. NFV decouples network functions, such as network address translation (NAT), firewalling, intrusion detection, domain name service (DNS), and caching, to name a few, from proprietary hardware appliances so they can run in software as virtual network functions (VNFs).
It’s designed to consolidate and deliver the networking components needed to support a fully virtualized infrastructure – including virtual servers, storage, and even other networks. It utilizes standard IT virtualization technologies that run on high-volume server, switch and storage hardware to virtualize network functions. It is applicable to any data plane processing or control plane function in both wired and wireless network infrastructures.
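To make "a network function running in software" concrete, here is a toy source-NAT table in plain Python. A real NAT VNF would sit in the packet path on a standard server; this sketch only models the address-translation state, and all addresses and port numbers are made up for illustration.

```python
# Toy source-NAT: map (private_ip, private_port) pairs to ports on one
# shared public IP, the way a NAT appliance (or NAT VNF) does.

class SourceNAT:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.mappings = {}  # (private_ip, private_port) -> public_port

    def translate_out(self, src_ip, src_port):
        """Return the (public_ip, public_port) an outgoing flow is mapped to."""
        key = (src_ip, src_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return self.public_ip, self.mappings[key]

nat = SourceNAT("203.0.113.1")
print(nat.translate_out("10.0.0.5", 12345))  # ('203.0.113.1', 40000)
print(nat.translate_out("10.0.0.6", 12345))  # ('203.0.113.1', 40001)
print(nat.translate_out("10.0.0.5", 12345))  # reuses ('203.0.113.1', 40000)
```

Because the function is just software state, it can be deployed, moved, or scaled like any other workload, which is exactly the point of NFV.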

How a Managed Router Service Can be Deployed with NFV

Sample deployment of a managed router service with Network Functions Virtualization (NFV)

Background


Product development within the telecommunication industry has traditionally followed rigorous standards for stability, protocol adherence and quality, reflected by the use of the term carrier grade to designate equipment demonstrating this reliability. While this model worked well in the past, it inevitably led to long product cycles, a slow pace of development and reliance on proprietary or specific hardware, e.g., bespoke application-specific integrated circuits (ASICs). The rise of significant competition in communication services from fast-moving organizations operating at large scale on the public Internet (such as Google Talk, Skype, Netflix) has spurred service providers to look for ways to disrupt the status quo.

History of Network Functions Virtualization

The concept originated from service providers who were looking to accelerate the deployment of new network services to support their revenue and growth objectives. The constraints of hardware-based appliances led them to apply standard IT virtualization technologies to their networks. To accelerate progress towards this common goal, several providers came together and created an Industry Specification Group within the European Telecommunications Standards Institute (ETSI).
The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) is charged with developing requirements and an architecture for virtualization of various functions within telecom networks, and with producing standards such as NFV MANO. ETSI is also instrumental in collaborative projects like the newly announced OPNFV.

NFV Framework

The NFV framework consists of three main components:
  1. Virtualized network functions (VNFs) are software implementations of network functions that can be deployed on a network functions virtualization infrastructure (NFVI).
  2. Network functions virtualization infrastructure (NFVI) is the totality of all hardware and software components that build the environment where VNFs are deployed. The NFV infrastructure can span several locations. The network providing connectivity between these locations is considered as part of the NFV infrastructure.
  3. Network functions virtualization management and orchestration architectural framework (NFV-MANO Architectural Framework) is the collection of all functional blocks, data repositories used by these blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs.
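The relationship between the three components can be sketched in a few lines of Python. The class and method names below are illustrative stand-ins, not anything from the ETSI specification: MANO places VNFs onto the NFVI and tracks what is running.

```python
# Toy model of the NFV framework: VNFs are deployed by MANO onto the NFVI.

class NFVI:
    """The pool of hardware/software resources VNFs are deployed on."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def allocate(self, amount):
        if self.used + amount > self.capacity:
            raise RuntimeError("NFVI out of capacity")
        self.used += amount


class VNF:
    """A software implementation of one network function."""
    def __init__(self, name, required_capacity):
        self.name = name
        self.required_capacity = required_capacity


class MANO:
    """Management and orchestration: places VNFs onto the NFVI."""
    def __init__(self, nfvi):
        self.nfvi = nfvi
        self.deployed = []

    def deploy(self, vnf):
        self.nfvi.allocate(vnf.required_capacity)
        self.deployed.append(vnf.name)


mano = MANO(NFVI(capacity=10))
mano.deploy(VNF("firewall", 4))
mano.deploy(VNF("dns-cache", 2))
print(mano.deployed)   # ['firewall', 'dns-cache']
print(mano.nfvi.used)  # 6
```

The real framework is far richer (descriptors, repositories, reference points), but the division of labor is the same: functions in software, resources underneath, orchestration on top.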

The building block for both the NFVI and the NFV-MANO is the NFV platform. In the NFVI role, it consists of both virtual and physical processing and storage resources, and virtualization software. In its NFV-MANO role it consists of VNF and NFVI managers and virtualization software operating on a hardware controller. The NFV platform implements carrier-grade features used to manage and monitor the platform components, recover from failures and provide effective security - all required for the public carrier network.

The Benefits of Network Functions Virtualization

NFV virtualizes network services via software to enable operators to:

  • Reduce CapEx: reducing the need to purchase purpose-built hardware and supporting pay-as-you-grow models to eliminate wasteful over-provisioning.
  • Reduce OpEX: reducing space, power and cooling requirements of equipment and simplifying the roll out and management of network services.
  • Accelerate Time-to-Market: reducing the time to deploy new networking services to support changing business requirements, seize new market opportunities and improve return on investment of new services. Also lowers the risks associated with rolling out new services, allowing providers to easily trial and evolve services to determine what best meets the needs of customers.
  • Deliver Agility and Flexibility: quickly scale up or down services to address changing demands; support innovation by enabling services to be delivered via software on any industry-standard server hardware.
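The "quickly scale up or down" benefit can be sketched as a trivial autoscaling policy that decides how many instances of a virtualized function to run. The thresholds and parameter names are invented for illustration; a production orchestrator would use richer signals.

```python
import math

def desired_instances(current, load_per_instance, target_load=0.7,
                      min_instances=1, max_instances=10):
    """Pick an instance count so average per-instance load nears the target."""
    total_load = current * load_per_instance
    needed = math.ceil(total_load / target_load) if total_load else min_instances
    return max(min_instances, min(max_instances, needed))

print(desired_instances(current=2, load_per_instance=0.9))  # 3 (scale up)
print(desired_instances(current=4, load_per_instance=0.2))  # 2 (scale down)
```

Because the function is software on industry-standard servers, acting on this decision is a deployment step, not a hardware purchase.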

Distributed NFV

The initial perception of NFV was that virtualized capability should be implemented in data centers. This approach works in many – but not all – cases. NFV presumes and emphasizes the widest possible flexibility as to the physical location of the virtualized functions.
Ideally, therefore, virtualized functions should be located where they are the most effective and least expensive. That means a service provider should be free to locate NFV in all possible locations, from the data center to the network node to the customer premises. This approach, known as distributed NFV, has been emphasized from the beginning as NFV was being developed and standardized, and is prominent in the recently released NFV ISG documents.
For some cases there are clear advantages for a service provider to locate this virtualized functionality at the customer premises. These advantages range from economics to performance to the feasibility of the functions being virtualized.
The first ETSI NFV ISG-approved public multi-vendor proof of concept (PoC) of D-NFV was conducted by Cyan, Inc., RAD, Fortinet and Certes Networks in Chicago in June, 2014, and was sponsored by CenturyLink. It was based on RAD’s dedicated customer-edge D-NFV equipment running Fortinet’s Next Generation Firewall (NGFW) and Certes Networks’ virtual encryption/decryption engine as Virtual Network Functions (VNFs) with Cyan’s Blue Planet system orchestrating the entire ecosystem. RAD's D-NFV solution, a Layer 2/Layer 3 network termination unit (NTU) equipped with a D-NFV X86 server module that functions as a virtualization engine at the customer edge, became commercially available by the end of that month. During 2014 RAD also had organized a D-NFV Alliance, an ecosystem of vendors and international systems integrators specializing in new NFV applications.

In our next blogs we discuss SDN and the difference between NFV and SDN. Keep reading and follow the blog.

Monday, 15 May 2017

Containers in Cloud Computing

At the moment, cloud containers are a hot topic in the IT world in general, and security in particular. The world's top technology companies, including Microsoft, Google and Facebook, all use them. Although it's still early days, containers are seeing increasing use in production environments. Containers promise a streamlined, easy-to-deploy and secure method of implementing specific infrastructure requirements, and they also offer an alternative to virtual machines.

Thursday, 4 May 2017

Hypervisors

A hypervisor or virtual machine monitor (VMM) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and OS X instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.
The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor, with hyper- used as a stronger variant of super. The term dates to circa 1970; in the earlier CP/CMS (1967) system, the term Control Program was used instead.

Classification


Type-1 and type-2 hypervisors
Type-1, native or bare-metal hypervisors
These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V and VMware ESX/ESXi.
Type-2 or hosted hypervisors
These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.


However, the distinction between these two types is not necessarily clear. Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system to a type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with other applications competing for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.

The modern hypervisor: A high-level explanation

The evolution of virtualization greatly revolves around one piece of very important software: the hypervisor. As an integral component, this software allows physical devices to share their resources among the virtual machines running as guests on top of that physical hardware. To further clarify the technology, it’s important to analyze a few key definitions:
  • Type I Hypervisor. This type of hypervisor (pictured at the beginning of the article) is deployed as a bare-metal installation. This means that the first thing to be installed on a server as the operating system will be the hypervisor. The benefit of this software is that the hypervisor will communicate directly with the underlying physical server hardware. Those resources are then paravirtualized and delivered to the running VMs. This is the preferred method for many production systems.
  • Type II Hypervisor. This model (shown below) is also known as a hosted hypervisor. The software is not installed onto the bare-metal, but instead is loaded on top of an already live operating system. For example, a server running Windows Server 2008R2 can have VMware Workstation 8 installed on top of that OS. Although there is an extra hop for the resources to take when they pass through to the VM – the latency is minimal and with today’s modern software enhancements, the hypervisor can still perform optimally.
  • Guest Machine. A guest machine, also known as a virtual machine (VM) is the workload installed on top of the hypervisor. This can be a virtual appliance, operating system or other type of virtualization-ready workload. This guest machine will, for all intents and purposes, believe that it is its own unit with its own dedicated resources. So, instead of using a physical server for just one purpose, virtualization allows for multiple VMs to run on top of that physical host. All of this happens while resources are intelligently shared between other VMs.
  • Host Machine.  This is known as the physical host. Within virtualization, there may be several components – SAN, LAN, wiring, and so on. In this case, we are focusing on the resources located on the physical server. The resource can include RAM and CPU. These are then divided between VMs and distributed as the administrator sees fit. So, a machine needing more RAM (a domain controller) would receive that allocation, while a less important VM (a licensing server for example) would have fewer resources. With today’s hypervisor technologies, many of these resources can be dynamically allocated.
  • Paravirtualization Tools. After the guest VM is installed on top of the hypervisor, there usually is a set of tools which are installed into the guest VM. These tools provide a set of operations and drivers for the guest VM to run more optimally. For example, although natively installed drivers for a NIC will work, paravirtualized NIC drivers will communicate with the underlying physical layer much more efficiently. Furthermore, advanced networking configurations become a reality when paravirtualized NIC drivers are deployed.
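The host-machine resource division described above can be sketched with simple arithmetic: the host's RAM is split between VMs by weight, so an important VM (the domain controller) receives more than a minor one (the licensing server). This is purely illustrative, not any hypervisor's allocation API, and the VM names and sizes are made up.

```python
# Divide a host's RAM between VMs proportionally to administrator-chosen
# weights, the way an admin "sees fit" to allocate shared resources.

def divide_ram(total_mb, weights):
    """Return a per-VM RAM allocation (MB) proportional to each weight."""
    total_weight = sum(weights.values())
    return {vm: total_mb * w // total_weight for vm, w in weights.items()}

allocation = divide_ram(16384, {"domain-controller": 4,
                                "app-server": 3,
                                "licensing-server": 1})
print(allocation)
# {'domain-controller': 8192, 'app-server': 6144, 'licensing-server': 2048}
```

Modern hypervisors go further and adjust these shares dynamically at runtime, but the underlying idea is the same weighted split.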


Now that there is a better understanding of the hypervisor and the various components which fall under it, we can examine the major players in the industry.

The “Big Three” Hypervisor Vendors

Although there are several smaller organizations which are developing their own hypervisor technologies, three manufacturers have really taken the market by storm with their solutions. As leaders in the space, there are specific differentiators between the products. This doesn’t mean one is necessarily better than the other;  rather, it means that there may be a better fit for one hypervisor over another.
VMware vSphere 5 – VMware has come a long way in the hypervisor market. It still commands the top spot within the server virtualization world and makes some of the best application and desktop virtualization technologies. As VMware continues to evolve, the product is known for its feature-rich suite capable of some very powerful solutions. Integration with DR, SAN, LAN and WAN technologies makes VMware an integral component of many modern data center environments. One of the biggest challenges facing a potential customer is understanding which feature and license set is required for a given project. Prior to any purchase, the IT team should plan out the deployment and have a clear picture of which features are key for the environment.
Citrix XenServer 6 – As a leader in the application virtualization and delivery markets, it was only a matter of time before a hypervisor came into the picture. Originally known as XenSource, XenServer was born from an open source world and built upon enterprise technologies. Development of the XenServer platform has come a long way. Now, more organizations are deploying this technology into their production and test systems which are capable of handling a global load. The entry price point for the hypervisor is very enticing and the Enterprise version contains many of the necessary enterprise features which administrators demand. Failover, HA, shared resources and other key components are all native to the hypervisor and are ready for production rollouts. Arguably second in the server virtualization space, XenServer continues to innovate and expand its product offering.
Microsoft Hyper-V 3 – With the latest release of Windows Server just around the corner, there has been a lot of excited conversation revolving around the latest iteration of Microsoft’s Hyper-V. Already a solid platform, some limitations are being directly addressed with the new release. Live migration, storage resource pools and even cloud backup capabilities are all being built into the new hypervisor technology. Couple that with the very low price point to purchase – and we may very well see a new powerhouse emerging. The newest version of Hyper-V looks to take some serious market share away from both Citrix and VMware as it delivers a more production- and enterprise-ready hypervisor. The direct integration with Windows Server systems will make this product even more enticing.

Although VMware is currently the market leader within the virtualization space, others are quickly emerging as leaders in their space. Remember, in this article, we’ve mainly been discussing server virtualization. As industry demands have grown, technologies have expanded beyond the server. Now, there are technologies for application virtualization (ThinApp and XenApp) as well as technologies which can virtualize desktops (XenDesktop and View). The entire idea behind a virtual data center is to create a more efficient environment which is easier to manage and orchestrate. Furthermore, concepts such as DR and physical server consolidation are all made easier when virtualization is introduced. There is very little doubt that the technology will continue to be adopted and improved. The exciting part is watching how further innovations can help organizations of all sizes align their business goals with their IT infrastructure.

What are hypervisors used for?

Hypervisors are important to any system administrator or system operator because virtualization adds a crucial layer of management and control over the data center and enterprise environment. Staff members not only need to understand how the respective hypervisor works, but also how to operate supporting functionality such as VM configuration, migration and snapshots.

The role of a hypervisor is also expanding. For example, storage hypervisors are used to virtualize all of the storage resources in the environment to create centralized storage pools that administrators can provision -- without having to concern themselves with where the storage was physically located. Today, storage hypervisors are a key element of software-defined storage. Networks are also being virtualized with hypervisors, allowing networks and network devices to be created, changed, managed and destroyed entirely through software without ever touching physical network devices. As with storage, network virtualization is appearing in broader software-defined network or software-defined data center platforms.