A virtual computing infrastructure, resource-efficient and secure.

Sakib Sadman Shajib

Virtualization can be defined as the process of creating a virtual version of computer hardware (CPU, GPU, RAM), operating systems, applications, storage devices, and other computer network resources [1]. The core idea of virtualization is to move away from the age-old model of “one server, one application”. Virtualization puts to work the many hardware resources that sit idle in a conventional computing system, which ultimately increases overall efficiency.


In a conventional computing system, most computer resources sit unused while the servers still consume a great deal of electricity. If every user has an individual physical computer, IT has more work and cost in upgrading the machines, licensing software for each of them, and troubleshooting whenever anything goes wrong. A virtualized computing system solves these problems. It is built on a piece of software called a hypervisor, e.g. Hyper-V from Microsoft, XenServer from Citrix, or VMware ESXi.

The hypervisor divides the otherwise-idle resources into right-sized software containers commonly known as virtual machines (VMs). Each VM behaves like a physical computer, so any operating system (OS) can be installed in it, and several OSs can run in parallel on the same hardware. In this way, multiple applications can be operated on a single physical server. Virtualization is highly recommended for large companies in pursuit of a cost-efficient, fast, redundant, mission-critical computing system; the design described here targets large enterprise deployments.
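As a rough illustration of how a hypervisor carves one host into VMs, the Python sketch below estimates how many identical VMs fit on a single server. Every figure in it (core counts, RAM sizes, the hypervisor reservation) is a made-up assumption for demonstration, not sizing advice.

```python
# Rough capacity estimate: how many identical VMs fit on one host?
# Every figure here is an illustrative assumption, not sizing advice.

host_cores = 32        # physical CPU cores in the central server
host_ram_gb = 256      # physical RAM in the central server (GB)
reserved_cores = 2     # kept aside for the hypervisor itself
reserved_ram_gb = 16

vm_cores = 4           # size of one virtual machine
vm_ram_gb = 16

vms_by_cpu = (host_cores - reserved_cores) // vm_cores     # 7
vms_by_ram = (host_ram_gb - reserved_ram_gb) // vm_ram_gb  # 15

# The scarcer resource decides how many VMs the host can actually run.
print(f"This host can run about {min(vms_by_cpu, vms_by_ram)} such VMs")
```

In this hypothetical case the CPU, not the RAM, is the limiting resource, which is exactly the kind of trade-off the hypervisor's resource scheduler manages for you.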

This infrastructure lets the organization keep server downtime close to zero while gaining higher efficiency, lower energy waste, and reduced cost. Because all the hardware resources are centralized, it is easier for IT administrators to maintain the servers, update software, manage licensing, and troubleshoot problems.

The process of building a virtualized computing infrastructure depends entirely on the client’s needs and workload. The basic blueprint of the infrastructure is given below:

Most companies use proprietary hardware designed specifically for their workload. For ease of understanding, the infrastructure is demonstrated here with a single central server. This server is configured with most of the processing power and only a minimal amount of storage, enough to install the OS and hold the hypervisor settings. Additional storage can be added as needed. The choice of hypervisor likewise depends on the needs and workload of the company.


The network interface card (NIC) in the central server(s) must be at least Gigabit Ethernet (GbE), but 10 GbE is recommended for optimal data transfer to and from the central server. All other network connections should be GbE.
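To give a feel for the difference, the short sketch below estimates the time to move a VM image across the network at line rate, ignoring protocol overhead; the 40 GB image size is an illustrative assumption.

```python
# Time to move a VM image at line rate, ignoring protocol overhead.
# The 40 GB image size is an illustrative assumption.

image_gb = 40
image_bits = image_gb * 8 * 10**9   # gigabytes -> bits (decimal GB)

for name, bits_per_second in [("1 GbE", 10**9), ("10 GbE", 10 * 10**9)]:
    seconds = image_bits / bits_per_second
    print(f"{name}: about {seconds:.0f} s for a {image_gb} GB image")
# 1 GbE:  about 320 s
# 10 GbE: about 32 s
```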

A separate data store/network-attached storage (NAS) unit is to be configured. Its drives should be arranged in RAID 6 (striping with double parity). SSDs are recommended for the drives where the OSs are installed, for quick boot times, but 15,000 RPM SAS HDDs are the minimum requirement. Bulk data can be stored on large hard drives of approximately 10 TB each. This ensures that all data is kept in a redundant, fault-tolerant system.
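Because RAID 6 dedicates two drives’ worth of space to parity, an array of N equal drives offers (N − 2) drives’ worth of usable capacity and survives any two simultaneous drive failures. A minimal sketch, assuming a hypothetical 8 × 10 TB array:

```python
# RAID 6 keeps two parity blocks per stripe: an array of N equal drives
# gives (N - 2) drives' worth of usable space and survives any two
# simultaneous drive failures. The 8 x 10 TB array is an assumption.

drive_count = 8
drive_tb = 10

raw_tb = drive_count * drive_tb
usable_tb = (drive_count - 2) * drive_tb

print(f"Raw capacity:    {raw_tb} TB")      # 80 TB
print(f"Usable capacity: {usable_tb} TB")   # 60 TB (20 TB goes to parity)
```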

A powerful router is necessary because most of this infrastructure’s services are delivered over the network. The hardware selection for the router depends on the company’s needs, but all NICs used in the router should ideally be 10 GbE. The router should also run a caching server, a firewall, and antivirus. Larger corporations might use proprietary software/firmware on their routers, while small companies can use open-source router OSs such as pfSense.

Approximately 40-45 devices might be connected to the wired network, so a 48-port Gigabit network switch can be used. The number and type of switches can be changed to suit the specific company’s needs.

There can be multiple nodes in the network. Each node will have its own network router, which keeps a log of its users. N.B.: despite there being multiple routers, the central router controls all DHCP clients. The wireless standard for the additional routers is recommended to be IEEE 802.11ac on the 5 GHz band.
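One simple way to plan addressing under a single central DHCP authority is to carve one private range into a subnet per node. The sketch below is only an illustration; the 10.0.0.0/16 range and the node names are assumptions, not part of the design above.

```python
# Carving one private range into a /24 per node, with the central
# router remaining the single DHCP authority. The 10.0.0.0/16 range
# and the node names are illustrative assumptions.

import ipaddress

site = ipaddress.ip_network("10.0.0.0/16")
nodes = ["head office", "second floor", "warehouse"]

for node, subnet in zip(nodes, site.subnets(new_prefix=24)):
    usable = subnet.num_addresses - 2   # minus network and broadcast addresses
    print(f"{node}: {subnet} ({usable} usable addresses)")
# head office: 10.0.0.0/24 (254 usable addresses), and so on
```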

The OS is installed to fit the needs of the company. We recommend the latest OS offered by Microsoft, as it is the most widely supported platform for office work.

A separate user account is to be created for every user. These users are then arranged into groups, and the group policies are aligned with the company’s employee policy. This allows policies to be enforced cleanly: groups can be allowed or restricted access to specific apps and services.
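The sketch below is a toy model of that idea: users inherit app permissions from their group, roughly as a Windows domain does with group policy. The group names, users, and app lists are hypothetical; a real deployment would use the directory service’s own policy tools.

```python
# Toy model of group-based access: a user may launch their group's
# apps plus a common set everyone gets. Group names, users and app
# lists are hypothetical.

GROUP_APPS = {
    "accounts": {"Excel", "ERP client"},
    "design":   {"Photoshop", "Illustrator"},
    "everyone": {"Outlook", "Word"},
}

USER_GROUP = {"alice": "accounts", "bob": "design"}

def allowed_apps(user: str) -> set:
    """Apps a user may launch: their group's apps plus the common set."""
    group = USER_GROUP.get(user, "")
    return GROUP_APPS.get(group, set()) | GROUP_APPS["everyone"]

print(sorted(allowed_apps("alice")))  # ['ERP client', 'Excel', 'Outlook', 'Word']
print(sorted(allowed_apps("guest")))  # ['Outlook', 'Word']
```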

Client devices are then connected to the network to use this infrastructure. Thin desktop PCs should be connected over the wired network, while mobile clients, e.g. smartphones, tablets, and laptop PCs, can connect through the wireless network.

The apps needed in the office can be virtualized using software such as Microsoft App-V, so that a user can run an app without logging into a whole new OS. Combined with profile virtualization, this lets every user keep their own settings and data separately.
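The idea behind profile virtualization can be illustrated with a tiny sketch: the app is shared, but each user’s settings live in a file keyed to that user, so nobody overwrites anyone else’s preferences. The paths and setting names here are invented for illustration and have nothing to do with App-V’s actual storage format.

```python
# Each user's settings live in their own file, so two users of the
# same shared app never overwrite each other's preferences.
# Paths and setting names are invented for illustration.

import json
from pathlib import Path

PROFILE_ROOT = Path("profiles")

def save_settings(user: str, settings: dict) -> None:
    PROFILE_ROOT.mkdir(exist_ok=True)
    (PROFILE_ROOT / f"{user}.json").write_text(json.dumps(settings))

def load_settings(user: str) -> dict:
    path = PROFILE_ROOT / f"{user}.json"
    return json.loads(path.read_text()) if path.exists() else {}

save_settings("alice", {"theme": "dark"})
save_settings("bob", {"theme": "light"})
print(load_settings("alice"))   # {'theme': 'dark'} -- untouched by bob's save
```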

This infrastructure can also be implemented at a much larger scale, but the basics remain the same.

An independent study commissioned by VMware showed that a modern virtualization platform with operations management capabilities enables a 67% gain in IT productivity, a 36% reduction in application downtime, a 30% increase in hardware savings, and a 26% decrease in time spent troubleshooting [2].

The IT administrator has administrative rights over all computers in the system. They can assign permissions so that a user can access only one app, or a chosen set of apps, and nothing else; they can block the installation of new software without approval; and they can restrict websites so that no user on the network can reach those sites. This can boost productivity. IT staff can also supervise all the computers, so abuse of the network is not tolerated.

With this, the main problems of a conventional computing system are largely solved. Companies can run a less stressful, more productive computing environment at lower cost and with less power.

Sakib Sadman Shajib is a student of Notre Dame College, Dhaka. He can be reached at contact@sakibsadmanshajib.com

