Monday, August 1, 2011

Robustness and Redundancy in Systems

Today I would like to tackle the issue of robustness in systems. When we design systems, the perceived quality of a system is determined by a few elements, including:

1) The ability of the system to carry out ACID transactions (atomicity, consistency, isolation, and durability); a minimal sketch follows this list
2) The ability of a system to recover from errors
3) The ability of a system to detect inaccuracies and inconsistencies in the transactions executed on it
4) The ability of a system to work consistently over finite intervals of time and to exhibit the same behavior repeatedly throughout
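To make points 1 and 2 concrete, here is a minimal sketch using SQLite from Python's standard library: the two balance updates either commit together or roll back together, so a crash midway cannot leave the accounts half-updated. The accounts table and the transfer helper are hypothetical, not taken from any particular system.

```python
import sqlite3

def transfer(db_path, from_acct, to_acct, amount):
    """Move money atomically: both updates commit together or neither does."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # opens a transaction; commits on success, rolls back on any error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, from_acct))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, to_acct))
    finally:
        conn.close()
```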

Depending on the type of system, its intended use, and so on, each system will have a multitude of criteria it has to satisfy before it can be called robust.

A simple example would be tires: we expect the tires of a car to go flat, so every car comes equipped with a spare wheel and a set of tools to change a tire. Though there is an inherent weakness in the system (the car), we overcome that weakness via redundancy.

But the same tires, when used on an airplane, have a totally different function: they have to work without fail time and again, so their compounds, their stress testing, and so on are totally different.

When we examine data systems, we notice that from an architect's point of view performance and security drive the system design, while robustness is a responsibility thrust upon the underlying platform. This means the systems are secure, reliable, and perform at a certain optimal level, but they are not robust.

The difference between robust and reliable is subtle. A robust system is one that can perform without errors time and again; a reliable system merely guarantees that it can recover from errors and maintain the integrity of its data.

The question that needs to be asked is: do we need to add an element of redundancy to our systems to make them robust?

For example, should we create a local cache that records a user's transactions and then updates a master database, so that if a particular application crashes the user is automatically returned to the point where the transaction was last committed?
Should such caches live in the system environment rather than the application environment? And what happens when the system itself crashes?
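Here is a minimal sketch of what such a local cache might look like: an append-only journal that records each transaction before the master database is touched, and replays anything unacknowledged after a crash. The journal file name and the push_to_master callback are hypothetical.

```python
import json
import os

JOURNAL = "pending_transactions.log"  # hypothetical local journal file

def record(txn: dict) -> None:
    """Append the transaction to a local journal before updating the master."""
    with open(JOURNAL, "a") as f:
        f.write(json.dumps(txn) + "\n")
        f.flush()
        os.fsync(f.fileno())  # make the record survive an application crash

def replay(push_to_master) -> None:
    """On restart, re-send anything the master never acknowledged."""
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as f:
        for line in f:
            push_to_master(json.loads(line))  # assumed to be an idempotent upsert
    os.remove(JOURNAL)  # clear the journal once the master has everything
```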

The answers to these questions lie in two aspects:
1) How hard does the shoe pinch? Lost transactions are not new to us: we will be entering data into a form and all of a sudden, boom, we experience a crash. Somehow, since PCs became popular they have done so on the Windows platform, and from the earliest days PC users have developed a tolerance towards crashes and lost data. Like car drivers, they keep spare tires (backup hard disks and the like) and a tire-changing kit (Windows copies for formatting and re-installation). Thus, for most users, making a computer robust has never been a requirement that supersedes ease of use, the cost of the software, and so on.

2) Monopoly: Windows has been the single largest OS in use since PCs started adorning our homes. No other OS has come close to a sizable share of the market; Windows holds roughly 90% of it, making it an outright monopolizer of the PC market. One of the outstanding features of this OS is that it is very easy to use, with visual aids that would make even Homer Simpson a power user of his PC. These features may not be present in Linux, which on the other hand is highly secure and robust and doesn't let its users down; yet only a small fraction of users prefer it, which suggests that most PC users don't rate robustness as a feature of importance.

Newer operating systems like Chrome OS offer a reprieve for people who need a robust operating system. Simple, fast, easy to use, and built around in-browser apps, it promises to be far more robust than Windows. Google is a company that believes in robust transactions: Google Docs and Blogger are more reliable and better-performing systems than anything we saw even 5-7 years ago. You keep typing in Blogger, your data is saved every now and then, and what's even cooler, you never lose it to a system malfunction.
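A hedged sketch of the kind of periodic autosave an editor like Blogger performs; the interval, the get_text callback, and the save_draft callback are all hypothetical stand-ins, not Blogger's actual mechanism.

```python
import threading

AUTOSAVE_INTERVAL = 30.0  # seconds between snapshots (hypothetical)

def start_autosave(get_text, save_draft):
    """Periodically snapshot the editor contents so a crash loses at most
    AUTOSAVE_INTERVAL seconds of typing."""
    last_saved = None

    def tick():
        nonlocal last_saved
        text = get_text()
        if text != last_saved:  # skip the round-trip if nothing changed
            save_draft(text)
            last_saved = text
        threading.Timer(AUTOSAVE_INTERVAL, tick).start()

    tick()
```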

The secret behind this type of robustness is redundancy. We all know our data is not stored on any single web server; it sits on a cluster of servers, probably virtualized, and is then backed up in another location.
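A minimal sketch of that kind of redundancy: write each value to every replica and treat it as durable only once a majority have acknowledged, so losing a minority of servers cannot lose the data. The per-replica store call is a hypothetical API, not any real storage system's interface.

```python
def replicated_write(replicas, key, value):
    """Succeed only when a majority of replicas hold the value."""
    acks = 0
    for replica in replicas:
        try:
            replica.store(key, value)  # hypothetical per-replica API
            acks += 1
        except ConnectionError:
            continue                   # a dead replica is tolerated
    if acks <= len(replicas) // 2:
        raise RuntimeError("write not durable: majority of replicas unreachable")
```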

Cloud computing and virtualization with fail-over clustering are other examples where enterprises can benefit from multiple systems providing the same functionality. But when a distributed application environment is used, these systems are limited by the robustness of the client systems connected to them; only when used as in-browser or terminal applications can they become truly robust.
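Fail-over on the client side can be sketched just as simply, assuming a list of interchangeable endpoints each exposing a send call (both hypothetical): the caller only sees an error when every redundant server has failed.

```python
def failover_call(servers, request):
    """Try each redundant server in turn until one answers."""
    last_error = None
    for server in servers:
        try:
            return server.send(request)  # hypothetical server API
        except ConnectionError as err:
            last_error = err             # fall through to the next server
    raise RuntimeError("all redundant servers failed") from last_error
```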

Thus, when we analyze systems as a whole, we have to conclude that our current technology permits systems to be robust only when there is an element of redundancy. Hence the fundamental question: are robustness and redundancy two faces of the same coin? Can't we have a system work in a robust manner with no redundancy? For now, it looks like the more redundant a system's functions are, the more robust it is.

Friday, July 1, 2011

DRS-ICC-BCCI-The Mire

Finally a conclusion has been reached: the BCCI and the ICC have agreed to the use of DRS, the Decision Review System, for all matches, but they have decided to make the use of ball-tracking technology optional. So now, with all the other member nations of the ICC supporting ball tracking and only the BCCI opposing it, we will have two types of DRS in international matches: one with ball tracking and one without.

This leads to many questions: 1) The rules of the game should be uniform for all the playing nations; are they now? Does the BCCI enjoy a privilege because it is one of the richest boards in the ICC? 2) If a batsman is given out LBW without ball tracking and out LBW with ball tracking, will these be considered the same kind of dismissal? Funny as it may sound, this is how the future of the game is going to be.

Personally, I feel that the rules of the game have to be uniform; just because Tendulkar and Dhoni oppose the use of ball tracking doesn't mean the rules should be changed for India alone. What the BCCI and the ICC have done is exactly opposite to one of the basic tenets of sporting philosophy: provide a level playing field. The BCCI has bullied its way to this.

So does this mean the BCCI is completely wrong? Is a batsman who has spent 20 years in cricket flawed in his thoughts about ball tracking? To answer these questions, let us take a look at the ball-tracking technology itself. First, who provides it? HawkEye and BallTrack. With just two providers, the financial angle of using this technology is going to be critical. As has always been the case with corporates, when technology is proprietary or niche there is an oligopoly involved; read Pepsi and Coke, whose prices go up and down together. So, theoretically, this solution is not financially stable, and the ICC cannot make it mandatory for member nations because it cannot guarantee a fixed price or inflation percentages.

The next, and most important, thing which most of the experts have missed is that there are no standards for evaluating the technology, so we are at the mercy of the technology providers to tell us where the ball is going. Here is an interesting question: suppose both HawkEye and BallTrack are used in a match, and HawkEye says the ball is going to hit middle stump while BallTrack says it is going to miss leg stump; whose advice should the third umpire take? This is where the ICC's technical committee should take a stance and define a set of basic standards for ball-tracking technology; some baseline should be established, and the providers should develop their technology to meet it. Till then, as Niranjan Shah put it: "It is one person's imagination Vs the umpire's imagination" (I feel he meant judgement).

For me, ball tracking should just be used to show the path of the ball from where it pitched to the point of impact on the batsman's pads. The 2.5-metre rule, the predicted path, and so on should be removed. Technology should be used to aid and assist judgement, not to forecast the turn of events for umpires.

Sunday, April 26, 2009

Organic Computers

Welcome to OLEDs and AMOLEDs! The era where organics and electronics start to merge is here. Back in 1981, when Robin Cook wrote the medical thriller "Brain", he suggested using the human brain as the processor for a computer; this, he believed, would be the ultimate artificial intelligence. It may have been too far ahead of its time then, but today it looks like organic computing will be a reality in the near future. There is already dedicated research on this front, most notably SPP 1183 of the German Research Foundation (DFG), and a number of papers have been published by researchers, among the most important being Christian Müller-Schloer's "On the feasibility of controlled emergence".



Let me examine this domain of computing in more detail: the need for organic computing, its advantages, and its disadvantages.



The need: electronic devices have a limit up to which they can be compressed, and this limit is governed by heat in the electronic circuit. The more we compress the circuit, the closer its parts get; since each part emits a certain amount of heat, that heat limits how close the parts can be. If we brought two components that emit a lot of heat, like transistors, too close together, their heat would add up and create a hot spot that could melt the entire circuit. A lot of research is being done on semiconductors and inorganic substances to overcome this difficulty. One of the earliest solutions proposed was using light instead of electricity to send signals; this was called opto-electronics. However, the cost of such circuits went through the roof, as they needed very high-quality semiconductors to convert the light signals back into electric power, which is needed for any activity outside the circuit, like turning a relay on or off. Hence, this never really took off.
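A back-of-the-envelope illustration of the heat problem; the figures below (a 100 W processor on roughly 1 cm² of silicon) are ballpark assumptions, not measurements.

```python
# Rough power-density estimate for a processor die (all figures are ballpark).
power_watts = 100.0   # typical desktop CPU power draw
die_area_cm2 = 1.0    # roughly one square centimetre of silicon

print(f"{power_watts / die_area_cm2:.0f} W/cm^2")        # ~100 W/cm^2

# Shrinking the circuit to half its linear size quarters the area,
# so the same power gives four times the density:
print(f"{power_watts / (die_area_cm2 / 4):.0f} W/cm^2")  # ~400 W/cm^2
```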

The second solution is seen all around us: organic computation. A small insect like a bee can read megabytes of data about its environment, process it, and take corrective action. How many watts do you think a bee consumes? A fraction of a watt would be a fair estimate. As its energy consumption is tiny, the heat it generates is also very, very small. Our brain is the next exemplar: we process something on the order of a trillion instructions per minute (maybe a billion; I am not sure of the figure), we store as much data as a million data centres would, and most important of all we store knowledge, which is distinguished from data by its ability to provide instructions for reacting to any external stimulus. How many watts does our brain consume? How much heat does it disperse? How much cooling and space does it need?
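A hedged order-of-magnitude comparison along these lines, assuming the commonly cited figure of roughly 20 W for the human brain; the operation counts are rough guesses, not measurements.

```python
# Order-of-magnitude energy efficiency (every figure here is a rough guess).
brain_watts = 20.0         # commonly cited power budget of the human brain
brain_ops_per_sec = 1e15   # very rough estimate of synaptic events per second

cpu_watts = 100.0          # typical desktop CPU
cpu_ops_per_sec = 1e11     # ballpark instructions per second

print(f"brain: {brain_ops_per_sec / brain_watts:.1e} ops per joule")
print(f"cpu:   {cpu_ops_per_sec / cpu_watts:.1e} ops per joule")
# The organic system comes out several orders of magnitude ahead per joule.
```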



This makes a strong argument for organic computers. No, I don't want your brain as my CPU. All I am saying is that a low-powered organic processor, able to handle tons of data and instructions, could be made at a fraction of the cost of a data centre or a supercomputer; it could be biodegradable, eco-friendly, very small, and consume a hundredth of the power today's computers consume.



The potential: the abilities of organic computers are almost endless. We need not have a fully organic computer to begin with; we could start with a screen of OLEDs and a completely inorganic CPU and other components. Later, maybe, we could tune our systems to read from organic hard disks that store hundreds of terabytes in something the size of our palm. What's more, if the disk gets full, we could add a new organic layer to the same disk and boost its capacity, or, even better, ask the disk to grow by itself... kinda cool, huh? Then maybe we could tackle memory by adding organic RAM; 100 TB of RAM, anyone?



More so, today we talk of clusters of computers trying to process millions of instructions, provide us with real-time data, render complex graphics, and so on. Think of these in an organic environment: we would need only a handful of servers, one of which could be made a kind of captain that receives all requests and intelligently decides which server should react to which request, just like in a cricket match, where the captain reads the conditions of the game and picks his bowlers. This could be the long-sought solution to distributed AI over large server systems.
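A minimal sketch of such a "captain", assuming each worker advertises the request kinds it handles best and tracks its own load; the Worker class and its handle method are hypothetical, not any existing scheduler's API.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    specialties: set   # request kinds this server is tuned for
    load: int = 0      # in-flight requests on this server

    def handle(self, request):
        return f"worker {id(self)} handled {request['kind']}"  # stand-in for real work

def dispatch(workers, request):
    """The 'captain': pick the least-loaded worker that specializes in this
    kind of request, falling back to any worker if none specialize."""
    candidates = [w for w in workers if request["kind"] in w.specialties] or workers
    return min(candidates, key=lambda w: w.load).handle(request)
```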



These systems could be flexible, either highly customized or providing multi-dimensional functionality. A music lover's system could be trained to be highly optimized for fetching music and video content from the net, with other functions less fast or less optimal; likewise, a geeky physics scientist could have his computer tuned first to meet his physics-modelling requirements, with other needs like social networking tuned in as secondary functionality.



These are just some examples of its potential; more could be quoted, but let me keep this post short.



The social context: the progress of our race has always been limited by the amount of information we are able to exchange. As cavemen we drew cave paintings; as we progressed we invented language, syntax, grammar, and so on. Today we have the internet, which sends billions of kilobytes of data every day to every corner of the earth. Tomorrow, if we could turn those billions of bytes into trillions, or whatever comes above that, imagine the information we could share and the quantum leaps the human race could make in its progress. Live 3D streaming and living in a virtual world are just things to start with... the bounds are endless.



The challenges: degradation is the biggest challenge with anything organic. As time progresses its functioning will slow, and it will eventually degrade beyond use; the data we store could be lost, the processors would have to be replaced periodically, the monitors purchased afresh. That is as of today. We could go ahead and build organic components that last a lifetime, perhaps as long as we live, or we could have aging organic devices pass on their information and processing capabilities to newer, younger devices.



Homogeneous response to stimuli: this is the next biggest challenge facing organic computing. As we know, different organic elements react in different ways to the same stimulus; light may attract moths while the same light chases away roaches. A completely organic system would therefore have to be controlled so that it reacts in a uniform manner to a given stimulus.



External environment affecting the working of computers: next time we may not have to worry about a software virus or bug, but about real viruses and bacteria that can affect the functioning of our machines by destroying some polymer structure, for instance...



In retrospect: the use of organic devices in electronic circuits is only bound to grow. Their uses can be endless, as can the advantages these devices provide us. On the flip side: a live organic terminator...?