Types and operational principle of a piston air compressor

Zabidul Islam

An air compressor is a machine that stores potential energy in the form of compressed air held in a tank at a certain pressure, measured in psi (pounds per square inch), which depends on the capacity of the tank and the power of the motor. The pressurized air is later released as energy to power pneumatic tools such as nail guns, staple guns, blowers, and many more.
Once, the use of compressors was limited to inflating flat tires at automobile shops. With the advancement of science, compressed air has found its way into many applications through a wide range of pneumatic tools.

Today, air compressors are used for sanding, sinking nails of different gauges and sizes, removing lug nuts, drilling holes in walls or metal, and much more. As a result, many homeowners, hobbyists, and DIYers now keep this useful machine at home. Fundamentally, air compressors can be classified in three ways: by delivered pressure, by design and operating principle, and by compression ratio. Piston, rotary-screw, and vane compressors fall under the second of these categories, design and operating principle.

Of these, piston and rotary-screw compressors are the most commonly used. Whereas rotary-screw compressors are meant for heavy-duty industrial applications, piston compressors are engineered for light- to medium-duty work in homes, garages, and gas stations.

Before walking through the working principle of a piston compressor, it helps to get acquainted with its key components. An air compressor has three main parts: the electric motor, the pump, and the receiver (tank).

The main purpose of the motor is to drive the pump via a flywheel and crankshaft. To suit different users, manufacturers offer motors with different power sources: gasoline, electric, or even diesel.

The tank of an air compressor comes in a variety of sizes, shapes, and materials. The smallest tank on the market holds half a gallon, and there is no practical upper limit. Tanks of 60 gallons or more are generally mounted vertically and fitted with wheelbarrow-style wheels so users can haul them around with ease. Commercial-grade tanks are constructed from cast iron to ensure maximum safety on the job site. Aluminum tanks are also available, but these are meant for domestic use, where a light, highly portable compressor matters most.

Next is the pump, the central component of an air compressor. Its purpose is to compress air and deliver it to the receiver. Pumps on the market are either oil-lubricated or oil-free; oiled pumps tend to last longer than their oil-free counterparts.

Let’s plunge into the compressor’s operation. A piston compressor compresses air in a cylinder using a piston, much like an automobile engine. A few basic components in the pump together complete the compression process: a cylinder, connecting rod, piston, crankshaft, suction valve, discharge valve, and head. The pump is sealed both internally and externally to make it airtight: piston rings mounted on the piston seal the cylinder from inside, while a packing gland seals the pump from outside.

A single-stage pump has one cylinder; two-stage pumps have two cylinders installed either in a V shape or in line. Whatever the layout, the inside of the pump is a hollow round bore called the cylinder. The piston moves up and down the cylinder to draw in and push out air; one full travel of the piston from one end to the other is called a stroke.

Both the inlet and outlet valves are located on the cylinder head. The intake stroke lowers the pressure inside the cylinder, so the suction valve opens and air enters. The compression stroke then raises the pressure, closing the suction valve and pushing the air out through the discharge valve. Driven by these pressure differences alone, both valves open and close in synchrony with the piston's movement.
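The pressure-driven valve behavior can be sketched in a few lines of code. This is an illustrative toy model, not part of the article: the atmospheric and tank pressures are assumed example values in psi.

```python
# Illustrative sketch: valve states in a single-stage piston pump,
# driven purely by the pressure difference across each valve.
# Pressure values are assumed examples in psi.

def valve_states(cylinder_pressure, inlet_pressure=14.7, tank_pressure=120.0):
    """Return (suction_open, discharge_open) for given pressures in psi."""
    suction_open = cylinder_pressure < inlet_pressure   # intake stroke draws air in
    discharge_open = cylinder_pressure > tank_pressure  # compression stroke pushes air out
    return suction_open, discharge_open

# Intake stroke: cylinder pressure drops below atmospheric -> suction valve opens
print(valve_states(10.0))   # (True, False)
# Compression stroke: cylinder pressure exceeds tank pressure -> discharge valve opens
print(valve_states(130.0))  # (False, True)
```

In between the two thresholds both valves stay closed, which is exactly the sealed portion of the compression stroke.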

Compression itself is carried out by the piston and cylinder. So what do the other components do? The connecting rod links the piston to the crankshaft, which is driven by the motor; together they convert the motor's rotation into the piston's up-and-down motion. The process continues until the receiver is refilled to the required air volume.

Source: Air Compressor Agency

Zabidul Islam Razib is a freelance writer and can be reached at saiful431241@gmail.com

Share your Idea or article by mailing at editorial@alsew.org with your name, institution, and Photo.

Can Artificial Intelligence Beat Human Intelligence?

Tilova Sumaiya

One controversy I hear almost constantly is: can artificial intelligence (AI) beat human intelligence? The question comes to mind naturally now that we can see what AI is capable of. At this point in history, we can say that artificial intelligence is a miraculous invention of the human brain, one we may come to depend on almost entirely within a very short time. It has made possible many things that were once beyond our imagination. So in today's writing, I will try to answer this question for the curious, based on a few arguments and illustrations.

In short, artificial intelligence is intelligence exhibited by software or a machine. AI research is highly technical and specialized, and is divided into subfields that often fail to communicate with each other; it is also divided by several technical issues, with some subfields focusing on particular kinds of reasoning. The main problem areas of AI are reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. More recent additions include statistical methods, computational intelligence, and traditional symbolic AI. Another core part of AI is machine learning. Machine learning and data science are both dominating the modern world: startups around the world that focus on them are exploring new business innovations and finding success with the combination of both. Microsoft recently cited a study noting that over the past two years, more than $1 billion of venture capital has been invested in machine learning and data science. For a new cohort built around cognitive technologies, considered among the most promising technologies of the future, Microsoft toured twelve cities across the US and Canada to select ten new startups for investment.

For decades we have been hearing prognoses that AI will take over the world. In the late 1950s, scientists predicted that within ten years a digital computer would be the world's chess champion; they also predicted that within three to eight years there would be a machine with the general intelligence of an average human being. The chess prediction did eventually come true, though not until 1997, when IBM's Deep Blue defeated world champion Garry Kasparov. The British mathematician Alan Turing was among the first to propose the idea of machines that think, in 1950; the Turing test he devised is still used today as a benchmark of a machine's ability to think like a human. The term Artificial Intelligence itself was coined in the mid-1950s, shortly after Turing's death, and soon became well known. Figures such as Elon Musk, the founder of Tesla, and Stephen Hawking have since spoken at length about both the promise and the perils of AI. Today AI is used in virtually every sector, and it is taking its place in our everyday lives; many expect that in tomorrow's workplaces, robots will stand in for employees across every field.

Human intelligence:

Human intelligence refers to the functions of our mind that develop through our ability to learn from past incidents and experiences, from lessons taught by parents, teachers, friends, and our surroundings, through adaptation to new circumstances, through engagement with intellectual ideas, and through the ability to shape our own environment using acquired knowledge.

Core Differences between Artificial Intelligence and Human Intelligence:

  • Human intelligence (HI) centers on adapting to the environment through cognitive processes, whereas artificial intelligence (AI) focuses on designing machines that can mimic human behavior. Human intelligence is the outcome of natural evolution; artificial intelligence is created by humans.
  • Human intelligence is the genuine phenomenon, whereas AI is an attempt to model it; and in science, there is always a distinction between an actual phenomenon and its model.
  • HI uses content memory and schemas, while AI relies on built-in memory designed by scientists.
  • HI is broader, while AI, as the name suggests, is artificial and limited. HI is dependable in ways AI is not, though some argue that humans make more mistakes than AI does.

A major AI development this year: a computing system developed by Google beat a top human player at Go, the ancient Eastern board game. Machines have already topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, and even Jeopardy! But Go, a 2,500-year-old game, is far more complicated than chess. As recently as January of this year, top AI experts outside Google questioned whether such a breakthrough could happen anytime soon, and many predicted it would take another decade before a machine could beat the top humans. Google proved otherwise, and the French researcher Rémi Coulom agreed, saying it happened faster than he had thought.

So, as we can see, AI is solving the complexities we predict for it faster than we imagine. Based on the recent illustrations and the differences between human and artificial intelligence described above, I would say that in the near future AI has the potential to beat individual top human minds, drawing on its own adaptive experience, study, and knowledge. But if the question is whether it can develop its own intelligence by itself, the answer is no, which clearly marks its limitation relative to human intelligence.

Tilova Sumaiya Khan Priyanka is a Business Analyst at Silicon Valley Nest. She graduated from the University of Dhaka. She can be reached at tilova_priyanka@yahoo.com


Rainwater: alternative or the only drinking water source of coastal belt in Bangladesh?


The coastal region of Bangladesh covers 20% of the country's total land, with a 711 km coastline. The coastal zone consists of the districts of Bagerhat, Barguna, Barisal, Bhola, Chandpur, Chittagong, Cox's Bazar, Feni, Gopalganj, Jessore, Jhalkati, Lakshmipur, Narail, Noakhali, Patuakhali, Pirojpur, Satkhira, and Shariatpur. These coastal areas are considered the most vulnerable in the country with respect to safe drinking water. Groundwater use in the coastal area is uncertain: the easily accessed aquifers are severely affected by salinity, and more than 10 million people face this same challenge. Though some aquifers at depths of 300 to 400 meters are suitable, they are not adequate for so large a population. Due to the impact of climate change, frequent storm surges contaminate sweet-water ponds and shallow tube wells, and drinking untreated surface water is not hygienic. The government, local NGOs, and international NGOs have implemented various systems for using surface water, such as pond sand filters (PSF), desalination systems, Carocell, and so on, but because these systems need skilled people to operate them, they tend to fall inactive after a certain period. It is therefore a matter of regret that both groundwater and surface water options are largely unable to meet the drinking water demand of coastal Bangladesh. In the dry season there is a serious scarcity of sweet surface water, and people sometimes travel more than 5 km just to collect drinking water. Most coastal villagers are fishermen and farmers who cannot afford bottled water. Only one option remains: rainwater. Rainwater harvesting is among the most ancient practices in Bangladesh and is considered an alternative water source there.

Bangladesh is a tropical country, and heavy rainfall occurs during the monsoon, from June to October. The highest rainfall falls in some of the coastal districts, which receive 2,700 mm or more per year. According to the Bangladesh Bureau of Statistics (BBS, 1997), 1.95 to 2.80 m3 of rainwater is available per square meter of catchment area each year for developing rainwater-based water supply systems. Harvesting rainwater for drinking is common practice in rural areas, but access remains limited in scale: 36 percent of people in the coastal belt depend on rainwater alone because no alternative remains to them. Protected ponds replenished by rainwater each year are another significant source in highly saline-prone areas. Rainwater quality is relatively good: it is free from arsenic contamination, salinity, and harmful infectious organisms and pathogens, and its physical, chemical, and bacteriological characteristics make harvested rainwater a suitable and acceptable source of potable water. It is not entirely free of toxic impurities, but these are easily avoidable. Harvested rainwater quality can, however, deteriorate with long storage, and bacteriological contamination such as coliform can occur if the catchment is not properly cleaned. The first runoff from the roof should be discarded to keep roof impurities out of the tank, and if the storage tank is clean, any bacteria or parasites carried in with the flowing rainwater will tend to die off. Nowadays rainwater harvesting is practiced at both large (community) and small (household) scale, and a simple, affordable, technically feasible, and socially acceptable safe drinking water supply for the coastal rural areas is very much in demand.
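The BBS figure above makes household sizing straightforward. The sketch below uses the conservative end of the quoted range (1.95 m3 per m2 of catchment per year); the roof area, household size, and per-person drinking water demand are assumed example values, not data from the article.

```python
# Rough household sizing using the BBS (1997) yield figure quoted above.
# Roof area and demand figures are assumed examples.

def annual_harvest_m3(roof_area_m2, yield_per_m2=1.95):
    """Conservative annual rainwater yield (m^3) for a given catchment area."""
    return roof_area_m2 * yield_per_m2

roof = 30.0                          # m^2, assumed roof of a rural house
harvest = annual_harvest_m3(roof)    # 58.5 m^3 = 58,500 litres per year
demand = 5 * 3.0                     # litres/day: 5 people x 3 L drinking water each
print(harvest)                       # 58.5
print(harvest * 1000 / demand)       # 3900.0 days of drinking water covered
```

Even a modest roof, on these assumptions, can collect far more drinking water in a year than a household consumes, which is why storage capacity, not catchment area, is usually the binding constraint.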
In these circumstances, rainwater harvesting can be considered a probable solution to the drinking water problem in arsenic- and salinity-affected areas.

In the present context, the main water supplies have all been compromised, and rainwater, once the alternative, has become the main option for meeting drinking water demand.


Aminul Islam Sohan
International Correspondent, Association of Life Science and Engineering Writers (ALSEW)
Lecturer, Department of Civil Engineering, Mogadishu University, Somalia
Email: sohan.bd71@gmail.com


What do you need to know about passwords?


It's now 2017, and by Moore's Law our transistor-count-per-die figures have hit record numbers. We have the 22-core Xeon E5-2699 v4 on the enterprise side and the 10-core 6950X, the beast of the game, for enthusiasts. GPUs, meanwhile, are developing at an even faster rate. Nvidia has released the Quadro P6000, the most powerful enterprise card on the market: 3,840 CUDA cores, 24 GB of GDDR5X VRAM, and roughly 12 TFLOPS (teraflops) of raw single-precision computing performance. These advancements in processing units are accelerating development enormously. It's a really good time to be alive. But on the other side of the spectrum, the availability of such extremely powerful cards is making us vulnerable.

First, let me describe the most common way websites store passwords. When you enter your password while creating an account, the website hashes it using some algorithm. As a result, most websites never know your actual password, unless they store it in plain text, which is very risky. The most widely implemented hash functions have been MD5 (128-bit) and SHA-1 (160-bit). These were strong and effective ways of storing passwords in the past, but not nowadays. If your password is just a short string of lowercase letters, it can be cracked in a remarkably short time with a simple brute-force attack. Let's simplify what I just said. Suppose your password is 12 lowercase letters. When someone brute-forces it, they simply try every possible 12-letter combination: 26^12 candidates, about 9.5 x 10^16. That might seem like a lot, but for modern GPUs it is tractable, and a cracking rig with many cards gets through it far faster still. If your password also has capital letters, numbers, and special characters (like !, @, #, $, %, ^, &, *), it becomes vastly harder to brute-force, though there are plenty of other ways to crack passwords too. So users need to create passwords that are tough to crack by brute force, and we should avoid passwords like password1 or 12345678: these are the most common and widely used, and there are password libraries containing millions of hacked passwords that people used in real life. And please, never use the same password on multiple websites. Huge companies like LinkedIn and Yahoo get hacked, and the passwords leak onto the internet.
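The arithmetic above is easy to check. The sketch below computes the brute-force search space as alphabet_size raised to the password length; the 10 GH/s hash rate is an assumed ballpark for a modern GPU attacking a fast hash like MD5, not a measured figure.

```python
# Brute-force search space is alphabet_size ** length (combinations, not bits).
# hashes_per_second is an assumed GPU cracking rate; real rates vary widely
# by hardware and by hash algorithm.

def crack_time_seconds(alphabet_size, length, hashes_per_second=10e9):
    """Worst-case time to exhaust the search space at the given hash rate."""
    return alphabet_size ** length / hashes_per_second

lower12 = crack_time_seconds(26, 12)                 # 12 lowercase letters
mixed12 = crack_time_seconds(26 + 26 + 10 + 32, 12)  # + capitals, digits, symbols

print(26 ** 12)                      # 95428956661682176 candidates
print(round(lower12 / 86400))        # ~110 days at 10 GH/s
print(round(mixed12 / 86400 / 365))  # well over a million years at 10 GH/s
```

Note how adding character classes moves the worst case from days to geological time: the exponent stays at 12, but the base grows from 26 to 94.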
If all your passwords are the same, someone might use one hacked password to log in to, say, your Amazon account and use your credit card, or do other dangerous things, like using your account to cause harm to someone. If the website's developer is smart, they will use a better hashing function such as SHA-512, with salting. Salting means adding an extra random string of characters, different for every user, to the password before hashing. Then even if the hashed passwords are compromised, the attacker cannot crack them in bulk, because the per-user salts make every hash unique and defeat precomputed tables.
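Salted hashing takes only a few lines with Python's standard library. This is a minimal illustration of the idea only: production systems should use a deliberately slow, purpose-built password hash (bcrypt, scrypt, or Argon2) rather than a single fast SHA-512 round.

```python
# Minimal illustration of salted password hashing (idea only; real systems
# should use bcrypt/scrypt/Argon2 rather than one fast SHA-512 round).
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt; returns (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)          # fresh 16-byte random salt per user
    digest = hashlib.sha512(salt + password.encode()).hexdigest()
    return salt, digest

def verify(password, salt, digest):
    """Re-hash the attempt with the stored salt and compare digests."""
    return hashlib.sha512(salt + password.encode()).hexdigest() == digest

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password1", salt, stored))                     # False
```

Because each user's salt is random, two users with the same password get different digests, so a leaked database cannot be attacked with one precomputed table.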

The summary of this long post: change every password you have ever created, because I'm sure your passwords are weak. Mix lowercase and capital letters, add numbers and special characters. Don't use your name or any common words. And always use different passwords for different accounts. If you can't remember them all, use a password manager such as LastPass, True Key, Enpass, KeePassX, Dashlane, Padlock, or Passbolt; they are free. And always turn on 2-step authentication, or even a third factor where available. These multiple steps of authentication can be a hassle to deal with, but they make your accounts dramatically harder to hack.

I know someone will bring up biometric authentication. I'll discuss that in the next part. Yes, there is a next part; stay tuned if you're interested.

Sakib Sadman Shajib is a student of Notre Dame College, Dhaka. He can be reached at contact@sakibsadmanshajib.com


Artificial Intelligence: A Beginning of a New Era


Though the human brain has amazing abilities, its slow raw processing speed leads us to use computers for faster performance. Since the invention of computers, their capability to perform various tasks has grown vastly: over time we have expanded the power of computer systems across diverse working domains, increased their speed, and reduced their size. A branch of computer science named Artificial Intelligence pursues making computers or machines as intelligent as human beings.

In 1956, John McCarthy first used the term "Artificial Intelligence", and he is renowned as the father of the field. According to him, it is "the science and engineering of making intelligent machines, especially intelligent computer programs". Artificial Intelligence is a way to make computers, robots, or software think intelligently, the way a human does. It is pursued by studying how the human brain works: how humans think, decide, and find solutions to problems. The development of AI began with the intention of recreating in machines the kind of intelligence we find and value so highly in humans.

Artificial Intelligence is a science that draws on computer science, biology, psychology, linguistics, mathematics, and engineering.

In the real world, knowledge has some inconvenient properties: its volume is huge, next to unimaginable; it is not well organized or well formatted; and it keeps changing constantly. An AI technique is a way to organize and use knowledge efficiently, so that it remains perceivable by the people who provide it, easily modifiable to correct errors, and useful in many situations even when it is incomplete or inaccurate. AI techniques also raise the execution speed of the complex programs they are built into. The goal is a self-evolving system: one that learns, shows intelligent behavior, and explains and advises its users the way a thinking, understanding human would.

Nowadays AI dominates many fields. In gaming it plays a crucial role: in strategic games such as chess, poker, and tic-tac-toe, a machine can evaluate a huge number of possible positions using heuristic knowledge. With the help of AI we can now interact with computers that understand natural language spoken by humans, a field known as Natural Language Processing. Expert systems integrate machines, software, and specialized information to reason and advise, providing explanations and recommendations to their users. Vision systems understand, interpret, and comprehend visual input: a surveillance aircraft, for example, takes photographs that are used to derive spatial information or maps of an area; doctors use clinical expert systems to diagnose patients; and police use software that matches a criminal's face against stored portraits made by forensic artists. Some intelligent systems can hear and comprehend spoken language, in terms of both sentences and their meanings, while a human talks; they can handle different accents, slang, background noise, even changes in a person's voice due to a cold. Handwriting recognition software reads text written on paper with a pen or on a screen with a stylus, recognizes the shapes of the letters, and converts them into editable text. Robots can perform tasks given by a human: they have sensors to detect physical data from the world, such as light, temperature, movement, sound, and pressure, along with efficient processors and large memories with which to exhibit intelligence. They can even learn from their mistakes and adapt to new environments, yet another miracle of Artificial Intelligence.

Artificial Intelligence is driving a great revolution in the history of humankind. Today most of the biggest IT companies use AI to assist their research and development, integrating it into mobile devices and everyday technology to make our daily lives easier and more comfortable. Google has developed its own AI, and its app Allo has AI built in. So AI is nothing less than the future of computer science: with it, we now achieve things that were once unimaginable, and we should try to enrich this branch of science as much as possible.

Shahriar Alam is a student of Department of Computer Science & Engineering at East West University, Bangladesh.  He can be reached at s.shaown08@gmail.com



Vulnerable Water Resource in Somalia


Water is fundamental to sustaining life, and the water crisis is a big issue in Somalia. Due to the civil war, the whole system has collapsed: there is no permanent government able to take responsibility for providing basic services to the people, and only some international NGOs work in Somalia for their betterment. Even setting the other serious issues aside, the water situation alone is at stake: there are not enough water resources to meet existing demand, and moreover the quality of the water is below standard.

Hydrologically, there are only two rivers, the Jubba and the Shabelle. The runoff coefficient of the Jubba is 6.5% and of the Shabelle 2.1%. Floods occur frequently along these rivers, due to the high level of the river beds, embankments broken by people, closure of the flood-release channels, and so on. Unfortunately, drought is also a major problem: in several years the lowest flow of the two rivers has been zero or close to zero. Salinity is another problem for the river water; it peaks from April to June, is moderate from December to March, and is slight in October and November.

There are only seven water basins, of which the Gulf of Aden basin, situated in the northern zone of Somalia, is the remarkable one, reaching the sea. Apart from the Gulf of Aden basin, no basin reaches the ocean, as the rainfall evaporates and infiltrates quickly.

The wars and the barkads are the most common water storage systems in Somali villages: artificial catchments, water pans, ponds, or small dams that accumulate storm water. The average size is 20 m long, 10 m wide, and 3.5 m deep. Because they depend entirely on storm water, there is no way to harvest rainwater in the dry season, and as a result many people and livestock die, while large numbers of people move from village to city in search of water alone.
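The dimensions quoted above translate directly into storage capacity. The calculation below uses the article's average size; the per-person daily allowance is an assumed emergency-supply benchmark, not a figure from the article.

```python
# Capacity of an average barkad from the dimensions quoted above.
# The 15 L/person/day allowance is an assumed emergency benchmark.
length_m, width_m, depth_m = 20.0, 10.0, 3.5
volume_m3 = length_m * width_m * depth_m
litres = volume_m3 * 1000
person_days = litres / 15           # assuming ~15 L per person per day

print(volume_m3)                    # 700.0 m^3 when full
print(round(person_days))           # ~46667 person-days of supply
```

A single full barkad on these assumptions covers a village of a few hundred people for several months, which is exactly why a failed rainy season is so devastating.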

Boreholes, dug wells, springs, and subsurface dams are the main sources of groundwater. With surface water so limited, these groundwater options are widely used to meet the minimum water needs of humans and livestock, but poor construction and shared outlets for livestock and humans degrade the water quality. The average depth of wells is 2-10 m, while borehole depth varies from 90 m to 220 m depending on the zone. Many boreholes have become unusable due to unsustainable drawdown of static water levels. Some springs are found in the mountainous areas of Somalia, but they are not enough to meet the water demand.

Water scarcity is the main obstacle to production and irrigation. Annual rainfall ranges from a minimum of 20 mm to a maximum of 1,350 mm, while annual potential evapotranspiration is between 1,500 mm and 3,000 mm, exceeding rainfall in every month of the year. Crop production therefore depends largely on the riverine areas of the Bay region. According to Somalia Water and Land Information Management (SWALIM), if 65% irrigation efficiency is assumed, a minimum of 9,230 m3 of water is required per hectare.
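The SWALIM figure can be sanity-checked by back-calculation. The sketch below assumes a net crop water requirement of about 6,000 m3 per hectare (an assumed round number, not from the article) and divides by the 65% efficiency quoted above to get the gross amount the source must deliver.

```python
# Back-calculating the SWALIM figure quoted above. The net crop requirement
# is an assumed round number; only the 65% efficiency comes from the text.
net_need_m3_per_ha = 6000.0
efficiency = 0.65
gross_m3_per_ha = net_need_m3_per_ha / efficiency

print(round(gross_m3_per_ha))   # 9231, close to the 9,230 m^3/ha quoted
```

The general point stands regardless of the exact net figure: at 65% efficiency, roughly a third of all water withdrawn for irrigation never reaches the crop.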

Water scarcity, salinity, electrical conductivity, turbidity, and hardness are the main drinking water problems. Urban areas use engineered water supplies, but it is a matter of regret that rural people lack sufficient water resources and, in addition, drink water without any treatment. The sanitation system is also made vulnerable by the water situation. Beyond the environmental issues, livestock, agriculture, and ultimately the whole ecosystem are threatened for want of water. It is high time for researchers, scientists, technologists, and donors to come forward to help these helpless people.

Aminul Islam Sohan is a Lecturer in the Department of Civil Engineering, Mogadishu University, Somalia. He is also an international correspondent at the Association of Life Science and Engineering Writers (ALSEW). He can be reached at sohan.bd71@gmail.com


A virtual computing infrastructure: resource-efficient and secure


Virtualization can be defined as the process of creating a virtual version of computer hardware (CPU, GPU, RAM), operating systems, applications, storage devices, and other computer network resources. The real concept of virtualization is to retire the age-old model of "one server, one application": it puts to work the many idle hardware resources found in a conventional computing system, which ultimately increases efficiency.


In a conventional computing system, most computer resources sit unused while every server consumes excessive electricity. If every user has an individual physical computer, IT has more work and cost in upgrading them, licensing software for each, and troubleshooting whenever anything goes wrong. A virtualized computing system is the solution. It is built with software called a hypervisor, e.g. Hyper-V by Microsoft, XenServer by Citrix, or VMware ESXi.

The hypervisor divides the unused resources into software containers of the desired size, commonly known as virtual machines. An operating system can be installed in each virtual machine just as on a physical computer, allowing multiple operating systems to run in parallel, so multiple applications can operate on a single physical server. Virtualization is highly recommended for big companies in pursuit of a cost-efficient, fast, redundant, mission-critical computing system; this design targets large enterprise deployments.

This infrastructure gives the user near-zero server downtime along with higher efficiency, lower energy waste, and reduced cost. Because all the hardware resources are centralized, it is easier for IT administrators to maintain the servers, update the software, manage licensing, and troubleshoot.

The process of building a virtualized computing infrastructure depends entirely on the client's needs and workload. The basic blueprint of the infrastructure is given below:

Most companies use proprietary hardware specially designed for their workload. For ease of understanding, only one central server is used in this demonstration. This server is configured with most of the processing power and a minimal amount of storage, enough to install the OS and save the hypervisor settings; additional storage can be added as needed. The choice of hypervisor likewise depends on the needs and workload of the company.


The network interface card (NIC) in the central server(s) must be Gigabit Ethernet (GbE) at minimum, but 10 GbE is recommended for optimal data transfer to and from the central server. All other network connections should be GbE.
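A quick calculation shows why the 10 GbE recommendation matters. The VM image size and the 80%-of-line-rate sustained throughput below are assumed example figures, not measurements.

```python
# Time to move a 40 GB virtual machine image (assumed size) at an assumed
# ~80% of line rate after protocol overhead.
def transfer_seconds(size_gb, link_gbps, efficiency=0.8):
    """Seconds to transfer size_gb gigabytes over a link of link_gbps Gbit/s."""
    return size_gb * 8 / (link_gbps * efficiency)

print(round(transfer_seconds(40, 1)))    # 400 s over Gigabit Ethernet
print(round(transfer_seconds(40, 10)))   # 40 s over 10 GbE
```

For one image the difference is minutes versus seconds; for nightly backups or live migrations of dozens of VMs, it is the difference between a workable maintenance window and an impossible one.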

A separate data center or Network Attached Storage (NAS) unit is then configured. Here the drives need to be arranged in RAID 6 (striping with double parity). SSDs are recommended for the drives where the OSs are installed, for quick boot times, but 15,000 RPM SAS HDDs are the minimum requirement. Data can be stored on large hard drives, each holding approximately 10 TB of storage. This ensures all data sits in a redundant, fault-tolerant system.
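The capacity cost of RAID 6 is easy to express. The drive count below is an assumed example configuration.

```python
# RAID 6 usable capacity: two drives' worth of space hold parity, and the
# array survives any two simultaneous drive failures.
def raid6_usable_tb(num_drives, drive_tb):
    assert num_drives >= 4, "RAID 6 needs at least 4 drives"
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(8, 10))   # 60 TB usable from eight 10 TB drives
```

The two-drive overhead is fixed, so the relative cost shrinks as the array grows: 8 drives give 75% usable space, 12 drives give about 83%.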

A powerful router is necessary because most of the services of this infrastructure are provided over the network. The hardware selection for the router depends on the needs of the company, but all NICs used in the router are recommended to be 10 GbE. The router should have a caching server, a firewall, and antivirus software installed. Larger corporations might use proprietary software/firmware for their networking routers, while small companies can use open-source router OSs like pfSense.

Approximately 40–45 devices might be connected to the wired network, so a 48-port Gigabit network switch can be used. The number and type of network switches can be changed depending on the specific company's needs.

There can be multiple nodes in the network. Each node will have a network router, which will keep a log of its users. N.B.: Despite having multiple routers, the central router will control all the DHCP clients. The wireless broadcasting standard for the additional routers is recommended to be set to IEEE 802.11ac on the 5 GHz band.

The OS will be chosen to fit the needs of the company. We recommend using the latest OS offered by Microsoft, because it is the most widely supported platform for office software.

A separate user account needs to be created for every user. These users are to be arranged in groups, and the group policies are to be configured to match the company's employee policy. This allows precise implementation of policies: administrators can restrict or allow each group's access to apps and services.
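The group-based scheme can be sketched in a few lines. This is an illustrative model only (the group names, apps, and the `can_use` helper are invented for the example, not a real directory-service API): permissions attach to groups, and users inherit them through membership.

```python
# Toy model of group policy: access is decided by group membership,
# never per user, so policy changes touch one place.
groups = {
    "accounting": {"allowed_apps": {"spreadsheet", "erp"}},
    "developers": {"allowed_apps": {"ide", "spreadsheet"}},
}
users = {"alice": "accounting", "bob": "developers"}

def can_use(user, app):
    return app in groups[users[user]]["allowed_apps"]

print(can_use("alice", "erp"))  # → True
print(can_use("bob", "erp"))    # → False
```

Adding a new hire then reduces to assigning the right group; no per-user permission editing is needed.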

To use this infrastructure, clients can be connected to the network. It is recommended to connect thin desktop PCs over the wired connection. Mobile clients (e.g., smartphones, tablets, laptop PCs) can be connected through the wireless network.

The apps needed for the office can be virtualized using software like Microsoft App-V, so that a user can use an app without logging into a whole new OS. This also enables profile virtualization, so that every user can save their own settings and user data separately.

This infrastructure can also be implemented at a much larger scale, but the basics are the same.

An independent research study conducted by VMware showed that a modern virtualization platform with operations management capabilities enables a 67% gain in IT productivity, a 36% reduction in application downtime, a 30% increase in hardware savings, and a 26% decrease in time spent troubleshooting [2].

The IT administrator has administrator rights over all computers in the system. He can assign permissions so that a user can access only a single app or a specified set of apps and nothing else, restrict the installation of new software without his permission, and block websites so that no user on the network can access those sites. This can boost productivity. The IT staff can also supervise all the computers, so no abuse of the network is tolerated.

We can see that many of the problems of a conventional computing system are addressed. Companies can now have a less stressful, more productive computing system at lower cost and power consumption.

Sakib Sadman Shajib is a student of Notre Dame College, Dhaka. He can be reached at contact@sakibsadmanshajib.com

Share your Idea or article by mailing at editorial@alsew.org with your name, institution and Photo.

Common Misconceptions about Agile Software Development


Nowadays the agile philosophy is the de facto philosophy of software development. When we talk about software development, we usually mean agile software development. Sadly, however, there are many misconceptions about agile among us, the software professionals. This article aims at unveiling these misconceptions.

Agile = Scrum

Scrum is the most popular agile software development process. We often use agile and scrum interchangeably. In various seminars and interview sessions, I ask people the difference between the two. Surprisingly, many of them fail to answer. The truth is, agile is a philosophy. There are many processes based on this philosophy, and scrum is one of them. There are other processes too.

Agile is a MUST

Agile is a proven philosophy, so you can follow it. But that does not mean agile is a must for your organization. For small-scale projects which can be done in one, two, or three releases, you can do just fine without agile. It is not a must even for large-scale projects. There are many big companies, including Samsung, that are doing fine without agile. I personally vouch for agile, but that does not mean agile is always good and suitable for you. This is a matter of choice.

Standard process

As an agile consultant for several companies, the most common request I face is that I am supposed to design a standard process for them, which they will then follow thoroughly in their organization. Remember, agile is NOT a process. There is no such thing as an "agile process". Agile is a philosophy: it values people over process, working software over requirement documents, and customer collaboration over contract negotiation. Agile has some other attributes too. Based on this nature of agile, there are several software development processes like scrum, TDD, kanban, etc. When we talk about some fixed process that "has to be" followed, that is NOT agile. The plan-driven approach is NOT agile. In agile, the team is supposed to choose which process to follow for which tasks. The team is not supposed to be bound to some fixed standard process; if it is, then it is NOT agile. Wherever you see phrases like "Agile Methodology" or "Agile Process", you should understand that the writers do not understand agile.

Self-organized team

There are many misconceptions surrounding the term "self-organized team". Companies hire expensive agile consultants and arrange expensive training sessions, hoping their teams will become self-organized and will no longer require supervision. This is positive thinking, but it is unrealistic. In practice, very few teams reach the self-organized level. So the realistic expectation is to improve the degree of self-organization in a team. And of course, teams still need to be supervised and led. This leading or supervision differs from its traditional meaning; in agile we call it servant leadership or servant supervision. I am not going to describe it here, as this is not our topic for today. So be realistic and do not mix up the terminologies. If you are in a top position in your organization, your misunderstanding can harm the interests of your organization.

Agile estimation

Another common misconception is that we can "derive a process" to estimate properly. Since we cannot estimate correctly with waterfall, we assume agile will help us estimate precisely. If you think that, then you do not understand agile. Neither agile nor waterfall nor anything else can give a precise estimation. Some processes based on agile suggest ways to estimate. Through these processes your estimation will be "improved", but it will not be "perfect". After three to five iterations, when your team's velocity and skills are established for the specific project, your estimation will improve for later iterations. That's it, nothing more, nothing less.
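The velocity idea above can be made concrete. The sketch below (illustrative numbers; scrum-style story points are assumed) forecasts the remaining iterations from the average of completed ones. Note it only improves as more iterations feed the average; it never becomes exact:

```python
import math

# Forecast remaining iterations from observed velocity (story points
# delivered per past iteration). A rough forecast, not a guarantee.
def forecast_iterations(remaining_points, completed_per_iteration):
    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    # Round up: a partially used iteration still has to happen.
    return math.ceil(remaining_points / velocity)

# First three iterations delivered 18, 22, and 20 points; 120 remain.
print(forecast_iterations(120, [18, 22, 20]))  # → 6
```

This is exactly the "improved, not perfect" point: the forecast shifts with every new data point, so treat it as a planning aid, not a commitment.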

Applicable to all scenarios

No matter how popular agile is, it is not applicable to all scenarios. To make agile successful, transforming your team alone is not enough; transformation is also required in all other departments like finance, admin, management, etc. Many a time, the team is not capable of following agile. This is a judgment call: there is no way to forecast whether agile will be good or bad for your organization. You need to weigh all the parameters and then closely watch how it goes.

Increase of team velocity

Many of us think that agile will increase team velocity by a great deal. This may or may not be true for your organization, depending on various factors. If you suddenly transform to agile, your team velocity will more likely decrease for a couple of months. Once the team is accustomed to agile, velocity is expected to rise above previous levels, but this also depends on various factors. Agile is a philosophy; it is not some black-and-white process that you can follow blindly. Transformation to agile is more a psychological transformation than a physical one. If you cannot unlearn your plan-driven mentality, then a merely apparent transformation to agile will bring you no good to mention.

Overnight transformation is possible

Transformation to agile requires time and effort. It is not something that can be achieved through a one-day workshop or a one-week training program. We can divide the transformation to agile into four stages. I am not going to discuss them now; just know that reaching even the basic level requires 9 to 12 months. This may sound strange, but it is a fact. For now, you can search for it on Google; one day I will write about it.


Usually, misconceptions hurt more than no conception at all. So we had better be careful and clarify our understanding. If you are in a leading position in your organization, then you cannot afford such misconceptions. Before taking any decision, you must have a clear understanding.

Arafat Ibn Sultan Riyadh is a seasoned product and project management professional and an agile coach. Moreover, he is a career counselor and motivational speaker. He is currently serving as the Product Manager at Bdjobs.com Ltd. He is a cofounder and Strategic Consultant of NWIT and a Strategic Consultant of Activation Ltd. He can be reached at arafatmist@gmail.com