20 Nov 2009

Nokia no better than Apple!

I have always trashed Apple for their crippled iPhone. You need iTunes to sync your files, you can't multi-task, you can't install apps outside the App Store, you can't use attachments in email, you can't see the files on your iPhone (explorer style) and so on.

Yet you could do all these if you jailbreak the iPhone. A jailbroken iPhone is much more versatile than Apple's crippled out-of-the-box iPhone.

So, I was upset and thought, OK, I'll buy a Nokia, which won't be as lame as an iPhone.

How wrong I was!

After buying the Nokia 5800 - the first really popular touch screen (or rather, press screen) phone from Nokia, often termed a cheaper alternative to the iPhone - I was in a joyful mood for a few days.

But the fun started to disappear when I discovered my Symbian smart phone is nearly as crippled as Apple's iPhone.

Let's start from the beginning. I bought the phone direct from Nokia and, as a result, it came network unlocked (which is a good start). But my patience ran out when the phone started showing its bugs quite often. For example, just after unplugging the USB cable, the shortcut bars started disappearing. After crawling the internet for a solution, I discovered that you need to hard reset the phone by pressing 3 keys (green, red, camera) + 1 (power) while powering it on. This deleted all my user data!

Then from forums, I also came to know that Nokia could have just allowed users to delete a single system file or a particular folder to solve it (without doing a hard reset). But on my phone, I could not even see those files. Why? Because they have been hidden by Nokia! To see them, I need to "hack" or "jailbreak" my phone - in the same way iPhones are hacked!

Then I tried to figure out why Nokia did that. The result was very interesting.

The Symbian Foundation is mostly owned by Nokia. An inherent feature of Symbian is its "signed certificate" scheme. Any app you want to install on your Symbian phone must be signed with a developer's certificate. It is analogous to getting your manager to sign his approval every time you want to access the internet at work (imagine how inconvenient that would be).

Apps from big commercial vendors (in Symbian) come with a properly signed certificate. So, you can install those apps directly without much drama.

However, problems crop up when you try to install apps from less wealthy developers. I continuously got "certificate expired", "certificate invalid" etc. errors. I got the same errors even with Symbian *.sis apps I had developed myself!

To get a signed certificate license, you need to pay a hefty sum to Symbian Foundation (i.e. Nokia).

There is another option though. On the Symbian Signed website, you can upload an unsigned app along with your IMEI number, and they will generate a signed app for you - but it will work only on your phone (as the IMEI is embedded in the app).

Not only is this a bit of an inconvenience, but they won't sign apps if they detect that the apps are trying to interfere with Nokia's own protection system.

What a money making exercise!

There are some apps with which you can hack your Symbian phone (most models). The most common is HelloOX (just search the internet). It needs to be compiled specifically for your phone (i.e. it is IMEI dependent). Unless you can sign it with your own developer's certificate or can read a Chinese-language website, you need to fork out $5 to get a signed version from their site. Once you get that, you can easily hack your Symbian phone! Then you can install your own apps and no longer get certificate error messages. Basically, this hack makes you a power user, analogous to the root user in Unix. So, you can see all files (even hidden system files) and bypass certificate errors.

Nokia argues that not everyone should become root, as bad apps can muck up the phone software and render it unusable.

Well, I counter this by pointing out that, just using Nokia's own firmware, my phone (and several others, as you can see on the internet) got screwed.

Also, as a developer, I know what I'm doing on my phone. So, why should I not get the flexibility of developing my apps for free?

Symbian is quite a powerful OS for smart phones. Even without hacking your phone, you can access individual files (except hidden system files) using a Windows Explorer style file manager. You can use attachments in emails, and you can read Microsoft Office files using the free Quick Office viewer (the editor costs money though). You can transfer music (or any file) directly, treating the phone as a USB disk.

So, Nokia definitely did something Apple didn't. But if they claim Symbian is an open system, then they must not close it off to users.

29 Oct 2009

Rise and fall of Microsoft


During 1990 to 2000, the following paragraph was relevant. In fact, this is what I wrote on my blog in 2000.
Most people nowadays, when buying new computers, use Windows as the OS. Some of them are even unaware that there are dozens of other operating systems - for example, Apple's Macintosh, IBM OS/2, Linux etc. These are for PCs. For bigger machines there are AS/400, Windows NT, Unix etc. It's not that Windows is the best of all, but it is really the most popular. The Windows graphical user interface is not at all intimidating. In fact, it is Microsoft's business strategy which has made Windows the #1 OS. Back in the early eighties, Microsoft & IBM jointly developed DOS. At that time Microsoft envisaged that the computer industry would revolve around PCs rather than mainframes, as IBM thought. So, in an attempt to dominate the PC market, Microsoft launched Windows. And it was a quick success. However, Microsoft is not the pioneer in developing GUIs. Apple developed a GUI OS for the Macintosh long ago, when people were still using DOS 3.0! Experts believe that MacOS is still superior to Windows. But Apple made a blunder by telling customers what they offer rather than listening to what they want. Very recently, Apple started selling their most advanced PC without a floppy drive! They have included a CD writer instead. They argue that the floppy disk is backdated. I think most potential buyers will stay away from the product. Another wrong step of Apple's was that they patented their programs as well as their designs. But IBM did not. When you buy an IBM-compatible computer from the market, IBM does not get any royalty. This is perhaps one major reason behind the widespread success of PCs. Among all OSes, Windows is perhaps the most forgiving. IBM OS/2 may be more powerful than Windows, but it won't run if there is the slightest discrepancy in your computer's hardware.
In 2009, the above scenario has changed a lot!
Today, Microsoft is quickly losing its place as the #1 software giant on earth. It's now head to head between Microsoft and Google. So far, Microsoft has got most of its revenue from its ubiquitous operating system, Windows (in various flavours), and its Office products. At this very moment, both have serious alternatives. Open Office is nearly 90% as good as Microsoft Office. It is free! Linux has become more mature and one can even run it directly from a USB disk. This is free too and you are encouraged to copy and distribute it. All products where Microsoft was a pioneer a decade back now have either free or very cheap alternatives. Be it Microsoft Encarta encyclopaedia (= Wikipedia), Microsoft Encarta Atlas (= Google Earth), Microsoft Train/Flight Simulator (actually developed by Kuju, which Microsoft bought later), Visio (= free flow chart editors) or whatever - you name it and we have a free alternative!
I know you're going to shout that I missed two important candidates, Visual Studio and SQL Server. Yes, Visual Studio does not have any equivalent - but then, it runs only on the Microsoft platform! On the other hand, the Express editions of Visual Studio are freely downloadable and serve amateur developers perfectly. SQL Server still has a good market as a low cost database with good features, but MySQL, PostgreSQL and Oracle's lite editions are not far away.
Very recently, Microsoft announced that they will launch web editions of their Office software. Alas, too little too late. We already have competing products like Google Docs, Zoho Office, EditGrid etc. Still, most computers are sold with Windows pre-installed. But as time goes by, we shall see more of them being offered with Linux.
Windows Mobile was launched with much fanfare. Yet, it is now a dying smart phone OS - facing stiff competition from open source systems like Symbian, Android, Linux Maemo etc.
Microsoft's search engine, Bing, failed to make a dent in Google or Yahoo's dominance.
Windows Live is a flop! Hotmail users are migrating to Gmail, Yahoo or other places.
Interestingly, Microsoft's arch rival Apple is growing stronger!
I personally disliked Apple for their closed architectural approach to everything (including the iPhone). To use an iPhone to its full potential, one needs to jailbreak it - which is not supported by Apple. Also, you can't use it without involving iTunes! What a rubbish concept.
Yet, Apple succeeded in making the iPhone a desirable item to millions around the globe.
Microsoft's Zune never managed to compete with the iPod. Windows Mobile is struggling where the iPhone is flourishing!
I always admired Microsoft. Unlike Apple, they never crippled their software, and they even encouraged piracy of their products. That's why their products have such a big market share today. I myself learned computing on DOS (and then Windows), and Visual Basic was the first programming language I mastered.
So what did Microsoft do wrong?
They failed to see the power of crowdsourcing. The wiki is today's power. The voluntary collaborative approach gave birth to Linux, Firefox, Wikipedia and much more.
Bill Gates also could not comprehend the power of cloud computing. He tried to stick with the 1 computer, 1 license revenue concept - which is now becoming obsolete.
Microsoft never invested properly in their database business; that's why SQL Server could never become a true competitor to Oracle. Microsoft always targeted the mass market via Windows and Office. But after the World Wide Web, people everywhere have so much choice.
Windows 7 is a desperate attempt to get some people back to Windows. But again, it is going to fail. I would now rather see Linux on my computer than Windows.

20 Oct 2009

Oracle RAC - tiny intro

Introduction

A RAC database is a clustered database.

There are multiple Oracle instances connected via an "interconnect" - a type of LAN.

The datafiles are stored on several disk drives which are connected via "cluster aware storage".

With multiple instances, we can add or remove a single Oracle instance without bringing the database down. So, the database always remains available. That is the essence of RAC - high availability.

A RAC database is managed by "Oracle Clusterware". RAC operates in a "shared everything" architecture.

Any storage (eg. SAN, SCSI etc.) can be used with RAC. However, good I/O speed is required for scalability of storage.

RAC supports up to 100 nodes, which may differ at the hardware level but must run the same operating system!

Oracle recommends using ASM (Automatic Storage Management) for ease of dealing with clustered storage.

As a reminder, RAC is used for high availability and scalability. When the workload grows, you can simply add another server to the grid (RAC is a type of grid computing after all).

RAC can be managed via Oracle Enterprise Manager.

An Oracle RAC database requires three components - cluster nodes (the servers or computers running Oracle instances), shared storage (disk drives) and Oracle Clusterware (software application).

Installing RAC

The first step of working with RAC is to install "Oracle Clusterware", which is done via the Universal Installer.

Then you have to configure the clusterware.

Then install ASM (Automatic Storage Management).

Now install the Oracle 11g database.

Then perform post installation tasks. This ensures that clusterware and database are installed properly and they are aware of each other.

It is possible to convert your normal single-instance Oracle database to a RAC database. You can achieve this via Enterprise Manager or the "rconfig" utility.

Administering Clusterware

Oracle Clusterware includes two important components: the voting disk and the OCR.

The voting disk is a file that manages information about node membership, and the OCR is a file that manages cluster and Oracle RAC database configuration information.

RAC can be administered via Oracle Enterprise Manager. In EM's web console, click on the Availability tab to see details of the Clusterware. You can click on the Topology tab to see a visual representation of your nodes. The Interconnect tab shows you info on the interfaces. You can also add a new instance to the clusterware via EM (under the Server tab).

Oracle Clusterware posts alert messages in its alert log - which is under $CRS_home.

RAC data dictionary views are created by catclust.sql.
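As a quick illustration of these views, here is a minimal Python sketch (assuming the cx_Oracle driver is installed; the credentials and connect string below are hypothetical placeholders) that lists the running instances of a RAC database via GV$INSTANCE:

import cx_Oracle  # Oracle's Python driver (assumed installed)

# Hypothetical credentials/connect string - replace with your own SCAN
# address, service name, user and password.
conn = cx_Oracle.connect("scott", "tiger", "rac-scan:1521/orcl")
cur = conn.cursor()

# GV$ views aggregate V$ data across every instance in the cluster.
cur.execute("SELECT inst_id, instance_name, host_name, status FROM gv$instance")
for inst_id, name, host, status in cur:
    print("Instance %d: %s on %s is %s" % (inst_id, name, host, status))

cur.close()
conn.close()

If all nodes are up, you should see one row per instance with status OPEN.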

Cache Fusion

Oracle RAC uses "Cache Fusion" to synchronize the data stored in the buffer cache of each database instance.

Since all computers/instances in a RAC access the same database, the overall system must guarantee the coordination of data changes on different computers, such that whenever a computer queries data it receives the current version - even if another computer recently modified that data. Oracle RAC refers to this functionality as Cache Fusion. It involves the ability of RAC to "fuse" the in-memory data cached physically separately on each computer into a single global cache.

Recommended Reference for further reading:

Oracle Database 2 day + RAC
http://download.oracle.com/docs/cd/B28359_01/rac.111/b28252.pdf

Converting single instance database to RAC

http://www.oracle.com/technology/pub/articles/chan_sing2rac_install.html

Other
http://www.orafaq.com/wiki/RAC_FAQ

3 Oct 2009

iPhone is not a geek phone (yet)

In spite of so much hype about the iPhone, I still think it is not for geeks (as yet).


It may be flashy and have a good touch screen interface, but it is limited in too many ways.

First is the lack of an open architecture. Apple has always been a pet hate of mine for their unwillingness to offer an open platform.


Though Apple is so vocal about the millions of applications available for the iPhone, most free applications are just crap. Yes, I repeat, just crap. Even some paid apps are crap too. Agreed, most iPhone apps don't cost much (maybe £1); however, most of them don't do any real task either.


The problem is, most iPhone apps are still not mature. Numerous applications are just a front end for web based applications (like the iPhone Facebook app, news readers etc.). Any website which is designed for mobile browsing already offers the same features and functionality. All those apps which can't be used unless you're online are pretty much useless IMHO.


My next gripe is iTunes. You can't just install an application directly (without using iTunes) on the iPhone. You can bypass iTunes, but you must then install it directly from the App Store. Give me the option of copying an application directly to its memory from my computer.


There are not many applications which you can use on the iPhone while being offline. If you want to develop iPhone applications and are NOT on the Mac platform, then you are stuck. Apple's arch rival Microsoft is still way ahead with its intuitive Visual Studio IDEs. You may argue, why should Apple offer good IDEs on a Microsoft platform? Well, forget the MS platform - why not offer something on the free Linux platform?


Yes, you can achieve a lot more things by jailbreaking the iPhone. But why do I need to do it in the first place? The product should be sold in unlocked form at a reasonable price.

In some European countries, Apple does sell the iPhone in unlocked form!!

8 Sept 2009

What is g in Oracle's grid computing?


Oracle defines Grid Computing like this -
With grid computing, groups of independent, modular hardware and software components can be connected and rejoined on demand to meet the changing needs of businesses.
What does it exactly mean?
We already know what parallel computing is. A complex task is divided into smaller parts and each part is processed independently by a computer. Then the outputs are combined to get the final result.
How does grid computing differ from this parallel computing?
Grid computing is an abstraction (or virtualization, as Oracle says). A grid can be an infrastructure grid, an application grid, an information grid and so on.
What - getting more confused? Ok, please read on.
In the early days of databases (~1980s), relational database management was starting to gain popularity. In RDBMS terms, a customer buys products and a transaction is generated. All this info - customer, product, transaction etc. - is stored in the RDBMS. They are linked together (by foreign keys, in 3rd normal form). So from a high level view, customer, product and transaction are all part of the RDBMS.
Now come to the present day. We are gathering data like never before! Besides RDBMS, we now have lots of different things like OLAP (Business Intelligence/Data Warehouse etc.), OLTP (transactional data), BPEL (Business Process Execution Language), web services (using XML), OWB (Oracle Warehouse Builder) etc.
The grid is a collection of all these - i.e. different applications and data which speak with one another.
Oracle has a product for almost everything nowadays - wrapped around a buzzy name called Fusion Middleware. These components are designed so that one component (say OWB) can interact with another (say the core Oracle database). This constitutes an application grid.
The relational data on your RDBMS can be termed as information grid.
There's another aspect of the grid. Take Oracle Real Application Clusters, or RAC. Here multiple instances of a database are interconnected to safeguard against a single point of failure. This is an example of an infrastructure grid. Everything is part of the grid. It is a concept. It could as well have been called Matrix computing. (Did you enjoy the Matrix series of movies?)
Now you know what the buzzword grid computing means!

What is Oracle Fusion Middleware (OFM) and Service Oriented Architecture (SOA)?


If you are not sure what these things are and try to have a look at Oracle's website, there is a good chance that you might find yourself confused.
That's quite expected. Oracle's website is for marketing their products. It's not in their interest to describe their products in a way that makes people think there is no magic about them!
So, here I try to explain the things in non-geeky terms.
Fusion Middleware consists of Oracle's non-core-database products - some of which are not actually middleware!
This includes Oracle's Developer suite (Forms & Reports), Java related tools, web logic server, content management etc. OFM depends on open standards such as BPEL, SOAP, XML and JMS.
Oracle SOA is a part of OFM.

What is SOA?

The main essence of SOA is that applications talk with each other in a language (i.e. data format, process steps) which is understood by all the other applications they communicate with.
SOA is about reuse.
SOA helps business to move, change, partner and re-invent itself with ease and grace.
SOA extends the idea of reuse not only to web services but also to business services.
SOA components are loosely coupled.
SOA can contain web service, BPEL etc.
Web service
For example, you throw a postcode at Yahoo Geocoder and it gives you back the latitude and longitude of that postcode.
You ask a website for the price of a particular item, and it supplies you the price.
Usually, web service results are returned in XML format to ensure universal compatibility.
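To make the idea concrete, here is a minimal Python sketch of a web service client in the Yahoo Geocoder style (the endpoint URL, parameter name and XML layout are invented for illustration - a real service will differ):

import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

def geocode(postcode):
    # Hypothetical endpoint standing in for a real geocoding service.
    url = "http://geocoder.example.com/lookup?" + urllib.parse.urlencode({"q": postcode})
    with urllib.request.urlopen(url) as resp:
        xml_doc = resp.read()
    # The service is assumed to reply with XML such as:
    #   <result><latitude>51.50</latitude><longitude>-0.12</longitude></result>
    root = ET.fromstring(xml_doc)
    return float(root.findtext("latitude")), float(root.findtext("longitude"))

lat, lon = geocode("SW1A 1AA")
print("Latitude:", lat, "Longitude:", lon)

The XML reply is what makes the service universally consumable - any language that can parse XML can call it.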
In SOA circles, you will often hear the term "orchestration". What does it mean?
If you have seen an orchestra, you know that the conductor just draws invisible shapes in mid air by moving his magic wand from one side to another. However, the musicians can decipher his rhythm and play their instruments so that everyone plays the same tune at the same pace.
Orchestration in SOA has a similar meaning. It ensures that all the applications under the SOA know how to stay in sync with the other applications in the group.
But how is orchestration implemented?
It is done via BPEL, or Business Process Execution Language.
BPEL is a tool with which you can draw how data moves from one application to another. For those who have not used it, it is like a Visio flow chart editor. But when you draw objects in BPEL, you tell them what to do. You instruct them where to read data from, how to process it and where to send the output after processing is finished. So, basically it is a graphical tool to define a business process. Without BPEL, the whole process would look like thousands of lines of PL/SQL (or Java or C++ or whatever) code!
Behind the scenes, BPEL still writes code - it just makes it simpler (!) for any business user to understand and define the process. Whether that is good or bad is debatable - I am just outlining the concept here.
So, BPEL works to integrate several applications. BPEL usually follows the XML standard as the interface between components.
Let us take a bigger example. Your supplier sends you a file which contains all the products, quantities and unit prices you ordered. You need to update your inventory accordingly. Now assume that your supplier quoted the prices in € but you need to put them in £ in your database. Using the absolute minimum of technology, you would write a small program to convert € to £ while loading that data into your system. But if you are using BPEL, you can draw the program visually!
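For comparison, the "small program" route might look like this minimal Python sketch (the file names, column names and the fixed exchange rate are all assumptions for illustration):

import csv

EUR_TO_GBP = 0.90  # assumed exchange rate; in practice you would fetch the daily rate

# Assumed supplier file layout: product, quantity, unit_price_eur.
with open("supplier_order.csv", newline="") as src, \
     open("inventory_load.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["product", "quantity", "unit_price_gbp"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "product": row["product"],
            "quantity": row["quantity"],
            # Convert the supplier's € price to £ while loading.
            "unit_price_gbp": round(float(row["unit_price_eur"]) * EUR_TO_GBP, 2),
        })

In BPEL you would draw the same read-transform-write flow as boxes and arrows instead of writing it by hand.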

6 Sept 2009

What is the best Smart Phone?

I was looking in the market for a comprehensive, feature rich smart phone. But the more I looked, the more confused I got!
Although the iPhone is probably the current market leader, it has two serious drawbacks - it's too pricey and it comes locked to a network! Though it is possible to jailbreak an iPhone, doing so invalidates the warranty. Too bad :(
Nokia's N97 is again too pricey. Its user interface is no match for the iPhone's, although the separate keyboard does help a lot.
I am currently an old generation Palm PDA user. So, naturally I keep an eye on its successor, the Palm Pre. But I don't think it will be any cheaper.
The Nokia E71 looks like a Blackberry, but I'm not sure I'd be interested in carrying a full size keyboard [which does not hide itself as on the N97]. It is cheaper, though the screen size is smaller.
This leaves two other candidates - the HTC Hero and a Blackberry. The former uses Google's Android OS and still has a hefty price tag.
Somehow, I've never been a fan of Blackberry.


22 Aug 2009

Theory of Everything


Fundamental particles and forces

The most familiar force of nature is the gravitational force! It is the force behind Newton's famous law of universal gravitation.
This law basically states that two objects in the universe attract each other with a force defined by the universal law of gravitation equation:
F = G * m1 * m2 / d^2
Where, G = 6.673 x 10^-11 N m^2 kg^-2
m1 = mass of first object (kg)
m2 = mass of second object (kg)
d = distance between the two objects (m)
F = gravitational attraction force (N)
This equation is applicable to objects at any separation, from a few mm to an infinite distance apart.
Gravity pulls matter together, gives you weight, makes apples fall from trees, keeps the Moon in its orbit around the Earth and the planets confined in their orbits around the Sun, and binds galaxies together in clusters.
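As a worked example, here is a short Python sketch applying the equation to the Earth-Moon pair (the masses and distance are rounded textbook values):

G = 6.673e-11  # N m^2 / kg^2

def gravity(m1, m2, d):
    # F = G * m1 * m2 / d^2
    return G * m1 * m2 / d**2

m_earth = 5.97e24  # kg
m_moon = 7.35e22   # kg
d = 3.84e8         # m, mean Earth-Moon distance
print("%.2e N" % gravity(m_earth, m_moon, d))  # ~2e20 N

That ~2 x 10^20 N of attraction is what keeps the Moon in its orbit.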
Classical mechanics (or Newtonian physics) applies to cases where the size of the particles is larger than the atomic scale and the speed of travel is much lower than that of light.
If you are looking at the physics of particles at the atomic level, then you need quantum theory. If you are approaching the speed of light, you need Einstein's theory of relativity.
For example, if two cars are coming towards each other at a speed of 30 km/h each, their relative speed is 30 + 30 = 60 km/h.
However, if two spaceships are travelling towards each other at 270,000 km/s each, their relative speed is found using the following formula:

u = (v + w)/(1 + v*w/c^2)
Where c is the velocity of light, and v and w are the speeds of the two spaceships as measured by a third observer. Note that for low values of v & w, the above equation reduces to u = v + w.
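Here is a small Python sketch of that formula, checking the spaceship example (c is rounded to 300,000 km/s):

C = 300000.0  # speed of light in km/s (rounded)

def relative_speed(v, w):
    # Relativistic velocity addition: u = (v + w) / (1 + v*w/c^2)
    return (v + w) / (1 + v * w / C**2)

print(relative_speed(270000, 270000))  # ~298343 km/s, not 540,000

So the two spaceships close at roughly 298,000 km/s - still below the speed of light.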
Atom = nucleus ( = protons + neutrons) + electrons
Electrons have -ve charge while protons have +ve charge.
Since particles with the same charge must repel each other, there must be a force which holds the positively charged protons together in the nucleus. This force is known as the strong nuclear force. It acts only at nuclear range (~10^-15 m).
Because the strong force binds nuclear particles so tightly together, huge amounts of energy are released when lightweight nuclei are fused together (fusion reaction) or heavy nuclei are broken apart (fission reaction). The strong nuclear force is the underlying source of the vast quantities of energy that are liberated by the nuclear reactions that power the stars.
The force by which opposite charges (eg. protons & electrons) attract and like charges repel each other is known as the electromagnetic force.
It is defined by Coulomb's equation:

F = k * q1 * q2 / d^2
(where q1 and q2 are the two charges and k is Coulomb's constant)

Electrons are bound to the nucleus by this force - thus it holds atoms and molecules together! Like the gravitational force, the electromagnetic force has an infinite range.
The electromagnetic force binds [negatively charged] electrons into their orbital shells, around the positively charged nucleus of an atom. This force holds the atoms together.

The electromagnetic force controls the behavior of charged particles and plasmas (a plasma is a mixture of equal numbers of positive ions and negative electrons) as, for example, in solar prominences, coronal loops, flares, and other kinds of solar activity.

The electromagnetic force also governs the emission and absorption of light and other forms of electromagnetic radiation. Light is emitted when a charged particle is accelerated (for example, when an electron passes close to an ion, or interacts with a magnetic field) or when an electron drops down from a higher to a lower energy level of an atom (from an outer to an inner 'orbit' around the atomic nucleus).
The weak nuclear force causes the radioactive decay of certain atomic nuclei. In particular, this force governs the process called beta decay, whereby a neutron breaks up spontaneously into a proton, an electron and an antineutrino. If a neutron within an atomic nucleus decays in this way, the nucleus emits an electron (otherwise known as a beta particle) and the neutron transforms into a proton. This increases (by one) the number of protons in that nucleus, thereby changing its atomic number and transforming it into the nucleus of a different chemical element. This force is applicable at the subatomic (~10^-17 m) range only.

The weak force is responsible for synthesizing different chemical elements in stars and in supernova explosions, through processes involving the capture and decay of neutrons.

When confined within a stable (non-radioactive) atomic nucleus, a neutron is a stable, long-lived particle. Once removed from an atomic nucleus, a free neutron will undergo beta decay, typically in about fifteen minutes. The reverse process of beta decay occurs in the collapsing cores of supernovae, where protons and electrons are fused together to create the vast numbers of neutrons that populate the end product of the collapse - a neutron star.
It has since been shown that the electromagnetic force and the weak nuclear force are basically the same force - called the electroweak force.
Before we move on, there are some other theories we need to learn. One among them is quantum mechanics. The key idea here is that the smaller the scale at which you look at the world, the more random things become. Heisenberg's uncertainty principle is a famous example of this. The principle states that when you consider a moving particle, for example an electron orbiting the nucleus of an atom, you can never measure both its position and its momentum as accurately as you like. Looking at space at a minuscule scale may allow you to measure position with a lot of accuracy, but there won't be much you can say about momentum. This isn't because your measuring instruments are imprecise. There simply isn't a "true" value of momentum, but a whole range of values that the momentum can take, each with a certain probability. In short, there is randomness. This randomness appears when we look at particles at a small enough scale. The smaller one looks, the more random things become!
A major difference between quantum mechanics and classical mechanics is that the laws of physics depend on the size of the particles at very small scales. At the very small (atomic) scale, the classical laws of physics no longer work. That is where the laws of quantum mechanics apply.

The theory of everything

We now aim to describe the fundamental forces above with a single theory! That's where string theory comes in.
Theory of everything = quantum mechanics + relativity + classical mechanics, combined into one framework
Classical physics assumes that a particle is the smallest existence. String theory asserts that the fundamental building blocks of nature are not like points, but like strings: they have extension, in other words they have length. And that length dictates the smallest scale at which we can see the world. The estimated size of these strings is 10^-34 m.
The mathematics behind string theory is very complex! Our conventional physics so far assumed only 4 dimensions - namely length, width, height and time. But string theory, in its various forms, assumes 10, 11 or even 26 dimensions. However, most of those dimensions are said to exist only at subatomic distances, curled up on themselves.
Strings can vibrate. In fact they can vibrate in an infinite number of different ways. This suggests that the different particles and forces are just the fundamental strings vibrating in a multitude of different ways.
In a nutshell, string theory gives us an exciting vision of nature as minuscule bits of vibrating strings in a space with hidden curled-up dimensions.

31 Jul 2009

Why offshoring works

Contrary to popular belief in the developed world, outsourcing actually works quite well.
Let's face it - if all companies were worse off after outsourcing, then the practice would have died a long time ago. But it didn't. In fact, even more companies are now outsourcing their work to offshore companies (most to India and other parts of Asia). Nowadays, not just IT but every other sector - law, education, engineering etc. - is being outsourced regularly.
Since I am in the IT sector, I can only vouch for what is happening in this segment.
Traditional IT jobs in the UK are very much based on imposed responsibilities. It means employees are designated to particular tasks - the developer will only develop the code, the tester will only test, support personnel will just execute (without much thinking) and Business Analysts will only look at Excel charts. Moreover, managers won't have a clue what their development team is working on, and System Architects will have no idea what it feels like to work in a particular technology.
Most UK IT firms refuse to accept the fact that the IT sector works best on role based responsibilities rather than designation based ones!
In Indian IT companies, the same person invariably works in developer, designer or even project manager roles. Not only does it help them understand the issues faced by other team members, it also avoids boredom.
Support jobs are always boring. Testing is a necessary evil - not many people want to do it, yet it is absolutely critical for any application to undergo rigorous testing. I hate the typical project manager's question - what percentage of the work is actually completed? Gosh, when will they understand that it is not a linear relationship? For the first few months, there may be only 10% of the work done. But in the last month alone, 90% of the work can be completed. How is that possible? Quite easy. When a project begins, lots of questions are left unanswered. The application is developed with voids in between. That application is worthless as such, as it can't be put into practice for its desired role. But when the voids are filled, the application becomes fit for purpose. Until that happens, it is very difficult to give a percentage completion report.
I remember one of my former project managers stopped asking me this question when I replied - it is just 43.3976% done and progressing on a logarithmic scale.
Ok, coming back to our off-shoring topic. So far we have covered boredom and the role rotation philosophy. But that is not all.
In India, most development work is done by fresh graduates (straight out of university). They are quite happy to work at a lower salary (even by local standards, compared to experienced people there) yet with acceptable quality - because most of them are quite eager to show off their expertise in programming logic.
However, in western economies, since people tend to do the same work for decades, they usually attract higher salaries (by local standards) without much quality improvement. One reason here is that you can't discriminate based on age - which is quite the opposite in the outsourced countries.
Since offshoring companies employ fresh candidates without real life experience, there is always a chance that they will produce junk output. However, this is taken care of by strict adherence to standards. It is a shame that many UK companies just don't follow any standards at all - almost everything is ad hoc!
The managers here probably don't recognize that just by implementing rigorous standards alone, a lot of time and money can be saved.
Standards implementation is complemented by documentation. Since the people working on a project are separated by several hours of time zone difference and no face to face interaction is possible, the onsite system architects have to be extra cautious to avoid any ambiguity in design documents.
Yes, sometimes it means spoon feeding everything to the offshore team. But it works in the longer run. The design document becomes so foolproof that anyone can just blindly follow it and get things done.
Of course, there is cost advantage as well. But what I discussed here are just some other aspects of off shoring.
I shall continue this discussion later.

9 Jul 2009

How does Google make money?

I know, you will jump and say from advertising, isn't it?
Yes, they make most of their revenue from advertising - nothing special about it. But the question is how they do it. Is it all fair for the rest of us, i.e. netizens?
You are already aware that when you search for anything in Google, colored horizontal and vertical (on the right) columns are displayed which say sponsored links. Obviously Google does take money from those companies for displaying their ads in the web results. But users are clever and they rarely click on those links. Just ask yourself, how often do you click on those links?
But people do click, although the number is very small - approximately 1 click on such ads for every 1000 page visits! However, Google pages are accessed by millions (if not billions) of visitors every day. So, for Google, even 0.1% of such clicks brings a huge amount of revenue. But sometimes Google does behave unethically. You can be a member of Google's AdSense and put ads on your own blog (like this one), and Google promises to send you money when a user clicks the ads on your page. But the catch is, you won't get paid until your account accumulates $100. For a small website, it will take years to reach that stage. So, all these years, Google will get revenue from advertisers but won't pay you anything for displaying the ads on your page. No worries, after you reach $100, you'll get the money - isn't it? Well, here lies the ugly bit. Google can (and they do) terminate your AdSense account at any time if they think the clicks are not legitimate. Still, that seems reasonable, as you are not supposed to click on your own ads (they can track from which IP the ads were clicked etc.). But even then, Google can terminate your account at any time without giving you a reason at all. Well, they do - they did it with me. Then I searched the net to see whether other people have had similar experiences, and yes, they have done it to millions of people! They claim they return the money to advertisers, but how do you know they have done it in practice?
Not very nice, is it?
The story does not end here. Every time you search for anything or use Gmail, Google Toolbar etc., Google stores information about your activity. There is no time limit on how long Google will store that information. They can store it forever. Then they sell this information to 3rd parties for market analysis (does the name Google Analytics ring a bell?).
Google bots scan your emails (and if bots can do it, anyone in a Google office could as well); it displays ads in your inbox title bar based on what you typed in your email. Google's Chrome collects all sorts of info on your web browsing (that's why I don't use it - but I still use Gmail though).
Google never discloses what algorithm it uses to check AdSense accounts for invalid clicks, how it ranks websites, how it uses users' net surfing info etc. In a nutshell, Google is never answerable to anyone for its activity.
Are you now afraid of searching anything using Google?
I am sure you will now ask me if I am recommending not to use Google at all. Well, it's your choice really. All I suggest is that you don't put all your eggs in one basket. Always use some alternative email accounts, alternative search engines etc. That's it.

29 Jun 2009

How online feedback is affecting business

How do you search for a product? The product may be your next car, the enterprise database for your organization or the best re-mortgage rate. Chances are (well, I'm pretty confident) that you search for the best deals on the internet.
You are most likely to scan three types of sites, viz.
1. Each vendor's own website

2. A comparison site (often known as screen-scrapers, like Money Supermarket, Froogle etc.)
3. The feedback written by general public (either online or anecdotes from your friends, colleagues etc.)
In the pre-internet era (say before 2000, when internet access was still few and far between), online feedback sites were rare and so were comparison sites. So, consumers could only gather information from the respective company's website and whatever it claimed in traditional TV, radio or newspaper ads.
The obvious problem was validating those claims. If they said "according to an XYZ survey conducted..." you really did not have much scope for validating the claim. You didn't have enough time/resources to dig the research report out of library journals.
But now, thanks to online forums, companies have no space to hide. The best example is selecting a car. Almost all manufacturers have user forums on the internet. Before you actually test drive a car, you have access to all the things that went wrong with that particular model. So what? Does it prevent anyone from buying a lemon or selecting the wrong mortgage?

No - of course it does not completely eliminate bad selections, but no doubt it makes consumers a lot wiser and more aware of what to look for.

Companies often claim that for every bad review, there are hundreds of satisfied customers. Yes, that is true to some extent. Consumers won't praise a product as often as they will criticise it when things go wrong. So, no reviews for a product can mean one of two things - either everyone using it was very satisfied, or it sold so few units that even those with bad experiences didn't bother writing their feedback online.
What is interesting is that, even today, many big companies are not really taking online feedback seriously. A well known case was "Dell Hell" (search for it on the internet). Basically, a few years back Dell started selling crap laptops and people flooded the web with their poor consumer experiences. That made a serious dent in Dell's revenue. Finally the company took steps to raise their quality and pacify the online rants of users.
Now the question which begs an answer is: why are companies still ignoring online criticism from users? It depends. If a company's primary user base is old people - who are not net savvy - they know they won't be hit hard, as it is mainly the younger generations who are glued to the net. Many people often confuse anecdotal evidence (i.e. my friend's car worked fine for 10 years) with statistical evidence (e.g. 500 units of the same model had a serious breakdown in the first year).
There are some agencies who actually work professionally to improve the online reputation of companies! So, if company A sees that complaints about their product are flooding the net, they ask these specialists to counteract the issue. These firms then try to dilute the negative feedback with loads of artificial positive reviews. They even employ tactics so that search engines bring up the good feedback first.
It's a cat and mouse game.

19 Jun 2009

Cost of ETL tools


Do you recall the cost of databases chapter? Well, getting prices for ETL tools is even harder work.

I could not find a single ETL vendor (other than the open source ones) who features their prices on their website.

If you ring them up for their price, they will ask you so many questions that you will be more confused than ever (it is as irritating as answering questions while applying for car insurance).

Some ETL vendors are very snobbish! They may refuse to sell you their product unless they think your organization is qualified enough to use it! This is definitely not a problem for multinational enterprises, but it is indeed a problem for small/medium enterprises that don't have that much budget or exposure. It will be interesting to observe whether the recent economic downturn has changed their outlook (nowadays they sometimes do give freebies, like an extra product thrown in at no cost).

All I can tell you is that major ETL software licenses (Ab Initio, Informatica, Data Stage) cost in the region of ~$500,000 for a typical large organization.
The table below gives you an approximate idea of cost.

ETL | Price | Demo/Try before buy
Ab Initio | $250,000 for Co-operating System per CPU; $10,000 per GDE; $500,000 for EME (their price is now based on SPECint - faster processor = you pay more) | No
Informatica | ? (plus mandatory database cost for the repository) | Yes, you can download a working version from Oracle's site (yes, Oracle's)
Data Stage | $55,000 - $110,000 per CPU | ?
Oracle Warehouse Builder | ? | Yes, you can download a working version from Oracle's site
Talend Open Studio | $10,000 per year per user | Yes
Expressor | Depends on "channel", a confusing concept introduced by them; see their website | No

31 May 2009

Photography - basic techniques revisited

In my school days I used a traditional manual film camera. But after the advent of digital cameras, I chucked away my manual camera and plunged into point & shoot mode. For over a decade, I never thought of going back to a manual camera as I don't have the patience to adjust it every time. However, when my compact camera finally gave up its soul, I had to look for another one. I thought of buying a digital SLR. But money was a constraint! So, I finally settled for a semi-automatic digital camera (also known as a bridge camera).

As I was revising my long forgotten camera knowledge, I thought of writing it down here in concise form so that I can refer to it in future.

Film speed = measured in numbers like 50, 64, 80, 100, 200, 400, 1600, 3200, 6400. The lower the number, the less grain/noise is captured in the photo (which is good). You should always try to shoot images at the lowest ISO speed possible for the best image quality. However, higher ISO settings are more sensitive in low light, so you need to use higher film speeds (200 and above) indoors or in low light conditions. Outdoor photos on sunny days should be shot at ISO 100 or lower.

Aperture = the measurement of the opening of the lens. Measured in F#. A higher F# means a smaller opening and vice versa.
For example, F1.8 will allow more light (= more exposure) into the camera compared to F16.

Shutter speed = the duration of the lens opening. A longer duration means more exposure (= more light) and vice versa. Measured in fractions of a second. A 1/60 shutter speed means the shutter will open for 1/60th of a second.

The aperture and shutter speed relationship as followed in most compact digital cameras. [Not entirely sure about this tabulation - I need to double check]

Aperture:      F1.8   F2.8   F4     F5.6   F8     F11    F16
Shutter speed: 1/250  1/125  1/60   1/30   1/15   1/8    1/4
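A quick way to sanity-check the table is to compute the exposure value EV = log2(F#^2 / shutter time) for each pair - equal EV means equal exposure. A small Python sketch:

from math import log2

# (aperture, shutter time in seconds) pairs from the table above
pairs = [(1.8, 1/250), (2.8, 1/125), (4, 1/60), (5.6, 1/30),
         (8, 1/15), (11, 1/8), (16, 1/4)]

for n, t in pairs:
    ev = log2(n**2 / t)  # exposure value; equal EV = equal exposure
    print("F%-4s 1/%-4d EV = %.1f" % (n, round(1 / t), ev))

The EV values come out nearly equal (roughly 9.7 to 10.0), so the tabulation is at least approximately consistent.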

Shutter speed range - 4 s to 1/2000 s

Large aperture = smaller F# = more light = shallow depth of field (subject is nearer) = only subject in focus = portrait

Smaller aperture = larger F# = less light = deep depth of field (subject can be far away) = everything in focus = landscape

Aperture priority mode = you set aperture value and camera decides appropriate shutter speed
Shutter speed priority mode = you set shutter speed value and camera decides appropriate aperture
Manual mode = you select both aperture and shutter speed

Lens focal length

Compared to traditional 35 mm cameras, a lens of 28 mm focal length or below is considered wide angle territory. A lens below 20 mm is a very good wide angle. From 11 to 18 mm, it is considered ultra wide angle. At the extreme, a 6 mm lens is known as a fish eye. Very wide angle lenses do show barrel distortion.

The standard fit lenses of most SLR cameras vary from 10 to 60 mm. So, most of them have a built in wide angle lens. They use separate lenses for telephoto/zoom.

CCD sensor - This is the equivalent of film in digital cameras. The larger the CCD sensor, the more detail can be captured. Most digital compact cameras have a small CCD sensor (4x6 mm). DSLR cameras have a much bigger CCD sensor (~16x24 mm), which is why they can capture rich, vividly colored images. CCD means Charge-Coupled Device. Traditional 35 mm film frames have a size of 24x36 mm. A larger CCD is more expensive. The sensor size difference is the fundamental reason why DSLR picture quality is far better than that of digital compact cameras.

Shooting techniques

Panorama

All the images from which you will stitch a panorama must have uniform exposure. Otherwise they will look awkward (with different brightness across the final panorama), or you face painstaking photo-editing to adjust the exposure on your computer. Any decent panorama stitcher software will be able to stitch the images into a stunning looking panorama.

To ensure uniform exposure, you must lock the exposure between images (if your camera allows this). Otherwise, if your camera has a panorama assist mode, it will do this for you by itself. If your camera has neither, then you can take panoramas only in bright sunlight (without any shade), where your camera is unlikely to vary the exposure between shots.

Landscape

For landscape shooting in bright sunlight, use a higher F# (over F5, possibly F11 or F16) and a lower shutter speed. The subject is considered to be at infinite distance for this shot. Keep the flash off. Use the lowest film speed possible. When you select Landscape mode on a compact camera, internally it adjusts the settings like this. If the sky is overcast, use a slower shutter speed and/or a larger aperture (= lower F#).

Portrait

The subject should be in focus. Here you want a shallow depth of field, so use a lower F# like F2.7. Adjust the film and shutter speed depending on whether you shoot outside or indoors. For night indoor shots, you might need to use the flash.

Moving objects

Since the object is moving (e.g. a moving car or an athlete), you need to use a fast shutter speed (typically 1/250 or faster, depending on the speed of the object). Many compact cameras have a sport mode for such shots. Some cameras offer a burst mode - where you can hold the shutter button and the camera takes photos continuously.

Environmental factors

The best photos are shot in bright sunlight! In fact, even a very cheap camera takes brilliant shots in the sun. The capability of a camera shows up on overcast days, in indoor shots and at super zoom/macro levels.

Mega Pixel (and myth)

All cameras are now advertised with megapixel values. Does a higher megapixel count mean a better image? Megapixels are calculated as = (# of horizontal pixels) x (# of vertical pixels) / (1024 x 1024).

So, an image of size 2048 x 1536 is 2048 x 1536 / (1024 x 1024) = 3 megapixels.

640 x 480 = 0.3 MP = VGA quality
3264 x 2448 = 7.6 MP etc.

The pixel count is constrained by the CCD sensor size. For most digital compact cameras, the sensor is quite small. For the same sensor size, an 8-MP camera will have a greater number of smaller pixels compared to a 3-MP camera - but distributed over the same area! If you press your nose against a television screen, you will see lots of tiny dots. These are pixels. More megapixels means smaller dots. Where the color changes between neighboring dots, the smaller the dots are, the smoother the changes will appear. That's the story behind higher megapixels. However, the true TV viewing experience does not depend only on resolution (MP for a camera) but also on how big the screen is (say a 42 inch screen against a 26 inch LCD). The screen size is the equivalent of the CCD sensor size!

Now see the difference, when people talk about TVs, they measure screen size but when talking of cameras, they don't talk sensor size but number of pixels! Otherwise how would camera manufacturers make you believe more MP means better?

Thus the correct comparison of image quality between two cameras is = square root of (higher MP / lower MP).

For a 3-MP and a 6-MP camera, the quality difference comes to = sqrt(6/3) = 1.41. So, one is only 41% better than the other - not twice as good, as the media make you believe! Buyers beware!
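Here is the arithmetic as a small Python sketch:

from math import sqrt

def megapixels(width, height):
    # MP = horizontal pixels x vertical pixels / (1024 x 1024)
    return width * height / (1024 * 1024)

print(round(megapixels(2048, 1536), 1))  # ~3.0 MP
print(round(megapixels(3264, 2448), 1))  # ~7.6 MP

# Linear quality ratio between a 6-MP and a 3-MP camera:
print(round(sqrt(6 / 3), 2))             # ~1.41, i.e. only ~41% better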

Sample specification of the Fujifilm S8000fd camera, which I now use

ISO film speed - 64/80/100 to 6400 in doubling increments
Aperture range - F2.8 to F8
Shutter speed range - 4 s to 1/2000 s
Lens range (35 mm equivalent) - 27 to 486 mm

30 May 2009

Do you really need an ETL tool?

Most ETL tools are quite expensive (especially those which actually work in very large DWH systems).

Many small and medium size organizations perform ETL operations via regular programming languages such as C++ or PL/SQL.

While it may not be the most efficient and productive way of ETL processing, we must remember the cost-benefit analysis.

In a project where source and target databases are both the same (say Oracle), it makes perfect sense to carry out moderate ETL processing via PL/SQL alone as long as you get acceptable performance out of it.

The requirement for a full scale ETL tool is most felt in a heterogeneous environment.

The main disadvantage of using traditional programming languages for ETL work is performance. Programming languages tend to process data in loops. For small data volumes that is fine. However, the looping technique does not perform well on large data sets. Yes, it is possible to tailor your procedural code to take advantage of parallelism (tormenting in C++ but bearable in PL/SQL), but in reality that is as involved as writing an ETL tool yourself and thus not a really rewarding experience (unless you can market your tool).

With ETL tools, you don't have to do anything (except keeping your data organized across partitions etc.) to gain the performance advantage. They are designed to build parallelism etc. into their execution plans.

But with a procedural language, you are on your own. One way to improve performance in PL/SQL (or similar) is to write cursors less often and use set operations more often. Most databases are designed to use the best parallelism available when you use set operations.
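The contrast is easy to show. Here is a minimal Python sketch using the built-in sqlite3 module as a stand-in database (the table layout and the 0.9 exchange rate are invented for the example); the same principle applies to PL/SQL cursors versus a single UPDATE:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (id INTEGER PRIMARY KEY, eur REAL, gbp REAL)")
conn.executemany("INSERT INTO prices (eur) VALUES (?)",
                 [(i * 1.5,) for i in range(10000)])

# Cursor/loop style: one statement per row - slow on large data sets.
for row_id, eur in conn.execute("SELECT id, eur FROM prices").fetchall():
    conn.execute("UPDATE prices SET gbp = ? WHERE id = ?", (eur * 0.9, row_id))

# Set-based style: one statement for all rows - the database optimises it.
conn.execute("UPDATE prices SET gbp = eur * 0.9")
conn.commit()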

Don’t forget cost of recruiting personnel in your chosen ETL tool. Depending on market condition, experts in one ETL tool might demand more compensation than other one. You also need to budget for organizing training for your existing staffs.

Most multinational companies can easily afford the best tool sets. However, for medium sized firms, budget is always a constraint. So, they are more prone to in-house development rather than buying a tool set off the shelf.

20 May 2009

How the internet & ebooks are changing the way we read books

Ok, let's admit it - the internet and ebooks have revolutionized the way we read books. Of course, a lot of people (including myself) still read traditional books, but we are reading more and more books/magazines/newspapers on the screen!

I still read non-technical books in conventional book format, but prefer to read technical books (mostly computer related, because of my profession) on the internet or in ebook format. The obvious advantage is that I can copy and paste any command into my application instantly and check the result.

Sites like www.scribd.com or www.lulu.com (and many similar sites) have indeed changed the landscape of book writing and publication.

I believe the profit from publishing a book, and the royalty from sales, have been gradually diminishing over the years. In the past, the only way to read a book was to either buy it or borrow it from a library. But now, a lot of books are either available free of charge on the internet or they cost a minuscule amount.

I am not disputing the fact that sometimes copyrighted versions of books are available on the internet illegally, but the bottom line is you can still download a very good book at no cost to you.

Most ebook readers are still expensive. However, you can read ebooks in PDF format on most PDAs and small netbook computers. In the near future, ebook readers' prices will only go down. More and more authors will publish their books in ebook form. Some ebooks are in proprietary formats. However, as ebooks gain more popularity, there will be hackers who will always make things breakable (again, I am not going into the debate of how unethical that is).


Gone are the days when people used to consult giant technical manuals when something did not work in software. Now people search on the internet and, unless the problem is too difficult or uncommon, usually within a few minutes a solution can be found and the problem is solved.


Even if the solution is not found via a web search, you can often raise the problem on discussion forums, and people in your profession all over the world will browse it and try to give you an answer. This is like having access to lots of technical experts at your fingertips. I admit that sometimes you do get crap suggestions for your problem, but most of the time the tricks do work and people really help each other.


Perhaps the funniest part is newspapers! Gone are the days when I avidly read the first page of my favourite newspaper. Now you already know what will be on the first page - as you saw the breaking news on the previous evening's news sites! This is one reason why newspaper circulation in the developed world is dwindling every year. Another reason is, obviously, the high cost of buying a newspaper. £1.20 a day for a newspaper is not cheap!

"From the consumer's perspective, though, there is a huge difference between cheap and free. Give a product away and it can go viral. Charge a single cent for it and you're in an entirely different business, one of clawing and scratching for every customer. The psychology of 'free' is powerful indeed, as any marketer will tell you." - from the blog of C. Anderson, the author of the Long Tail theory. However, I do wonder whether the author is willing to give away his next title, "Free", for free.


We shall see how the rise of "freeconomics" and "freemium" products continues to affect our lives.

29 Apr 2009

Best USB Linux distros

I have used almost all the Linux distros on the market! I liked some of them and hated the rest. The majority of users who still do not use Linux actually hesitate to install Linux on their computers' hard disks, fearing it will interfere with Windows.

So, I reckon a good way to start with Linux is to try it without touching your existing Windows hard disks.
Of course, you can run a lot of Linux distros directly from a live CD/DVD. But not only is this slower, you also often don't get any option to save your settings.

Fortunately, a lot of Linux distros offer the facility to create a live bootable USB disk! You can simply boot from your USB disk [as long as your PC BIOS supports that] and then work as normal. You can even save your files on the USB disk.

One thing you must remember though: Linux uses a different file system, known as ext2 or ext3, compared to NTFS (and FAT) for Windows. Linux can read (but may not write to) your Windows disks, but not vice versa (without using special software)!

New Linux users are mostly confused by the many distros available. Which one is best?

First of all, you must select a distro which supports your hardware fully. A lot of Linux distros don't support many WiFi cards, and if you use one of those, you can connect to the internet via ethernet cable only!

Here is a list of what I found about most popular distros.

Ubuntu - Version 8 onwards offers to create a USB disk for you once you have booted from the live CD.
Ubuntu is the most popular distro now and it has an excellent user community. However, its only problems are poor hardware support (especially non Intel based WiFi cards) and a rather slow startup time.

Open SuSE - A very competent distro. Very good hardware recognition. However, I couldn't make it work from USB following the guidelines.

Fedora (formerly Redhat Linux) - My personal favorite. Its hardware recognition is very good and it's lightning fast! My Windows Vista boots from the hard disk in 45 seconds. Fedora 10 boots from USB in the same time!
It is also easy to install new applications (eg. Open Office) in Fedora.

Mandriva - Another very good distro. Lots of built in apps. You can run it from a USB disk. But it too didn't like my WiFi card.

Knoppix - Its live DVD comes with the largest number of built in applications. But I was unsuccessful in running Knoppix 6 from a USB disk. Even if you run it from the live DVD, you can still save your settings (persistent storage) on a USB disk.

Puppy Linux - a small, frill-free Linux. At one time it was my favorite distro. But now there are better alternatives available. A version of Puppy comes with Open Office as well.

Damn Small Linux - Just 50 MB in size. But too small for modern day computing.

gOS - Based on Ubuntu, comes with lots of Google gadgets. Nothing special though.

Other distros - Not used yet.

Remember, if your Windows computer is screwed, you can still access most of your files if you boot up your computer with Linux.

The site www.pendrivelinux.com has a list of USB Linux versions with step by step guides on how to create them.

Feel free to try all of them and see what you like. My personal suggestions would be Ubuntu, Fedora and SuSE though, as they offer the best balance between features and usability.

In the Linux world, you often hear terms like Gnome and KDE. These are just two different GUIs for the desktop (like the Start menu/taskbar in Windows). You will also come across Debian and RPM packages. These are just ways of distributing applications in Linux. Some distros are Debian based (eg. Ubuntu) and some are RPM based (eg. Fedora).

22 Apr 2009

How my mobile phone works like an iPod Nano


I thought of buying an iPod Nano. But after I examined the iPod's features, I discovered that my Nokia 6300 mobile can do almost everything an iPod Nano does!
The iPod Nano's display size and resolution are exactly the same as my Nokia 6300's.
My Nokia can play MP3 and 3GP video files directly. I can convert any MPEG, WMV, MOV, AVI, FLV (YouTube's) etc. video files using the loads of free converters available on the internet.
Feature | iPod Nano | Nokia 6300
Screen | 320x240 [4 cm x 3 cm] | 320x240 [4 cm x 3 cm]
Music playback | AAC | MP3, WMA
Video playback | MOV | 3GP
Interface | Via iTunes only | As USB disk via cable or Bluetooth
Portrait/landscape mode | Possible by tilting the iPod | Possible via buttons
Built in accelerometer | Yes | No (but not an issue)
Adding extra applications | Only Apple's applications will work | Any Java (jar) file will work on the phone
Replacing battery | You need a soldering tool to replace the battery | Just take out the old battery and put the new one in
Extended memory | No way to put additional memory in an iPod Nano | You can add a micro SD card of up to 2 GB
Built in speaker | None - you listen via earphones only | Yes, and with good quality sound
Built in camera | No | Yes - 2 megapixels
Built in radio | No | Yes
Can be used as a phone? | No | Yes (it is a phone!)
Price (mid 2009) | £100 | £70
If you are thinking of buying an iPod (Nano or Classic), my advice is to hold on. There is a high possibility that your current mobile phone has all the facilities of an iPod. Although I do understand that the majority of people buy an iPod just because everyone else has one and it looks cool.
The Nokia 6300 is quite an old model. But even that phone can compete with an iPod. So, I am sure newer mobile phones have even more capabilities than an iPod.
Now I might have saved you a few $£€ etc.