Subscribe: Horizon
http://jroller.com/rss/kshitij
Language: English
Tags: application, ibm, linux, long, make, market, music, new, online, people, software, sun, system, technology, time, web

Horizon



Technology views, thoughts & ideas - by Kshitij Chandan.



Last Build Date: Sat, 28 Jul 2007 13:05:30 -0400

Copyright: Copyright 2007
 



Stuff that I read

Mon, 25 Sep 2006 13:01:11 -0400

I do a lot of reading, both online and offline. I am also into audiobooks a lot these days, which I feel are a really good use of time when you are commuting to and from work. Today I thought of listing some of the stuff that I really enjoy reading, and I recommend that people like me try them out:

  • Blog: I strongly recommend EMERGIC for anyone who's interested in the business of technology and the disruptive trends that emerge from it. The site was a real inspiration for me to start my own blog. It's the best aggregator, I feel, for the Indian audience who are starting to understand the blogosphere and everything the online world has to offer.
  • Magazine: Business&Economy is the best ... well ... business and economy magazine around this part of the world. It's a little opinionated; however, I feel it has the best coverage of national and international business and politics, news and reviews. It is sometimes as opinionated as a blog, but I guess that only adds a bit of spice to the news.
  • Books: I just completed The Long Tail and was dazzled by Chris Anderson's analysis. He has researched deeply into the various causes and effects of online businesses on the economy (or rather, economics) and has justified his theory probably three or four times over. I followed the blog that was always on its way to becoming this book, but I think the flow of the book is much better and much broader, making it not only an enjoyable read but one that keeps the reader guessing about what to expect in the days to come, given the massive impact of online businesses.
  • A second book I am very much enjoying (and I can say that even though I haven't completed it yet) is Tom Peters' Seminars. I recommend it for anyone who wants to try out something new in an enterprise. Basically, for an entrepreneur, the book breaks all the popular beliefs and goes literally 'beyond' disruption itself. If you want to break the rules and set your own, this book is a must-read.
Well that's all for now.





Team Collaboration Tool

Sun, 9 Apr 2006 11:26:04 -0400

Recently, with the rise of plenty of social communication tools like wikis, blogs and the like, it seems highly likely that email will now be the last choice for collaboration amongst teams in organizations (especially software and related ones). However, I am not seeing a complete solution dedicated to team collaboration; or maybe I am a little hard to please ;). I thought over my 'wishlist' of features for such a tool, which I think would benefit a very large section of groups in various techie, non-techie and even social organizations for their project management and team collaboration requirements.

My concept revolves around empowering the lower and middle management, down to the last employee in the hierarchy, to manage and inform team members about day-to-day progress, while making it easier for top management to drill up and down to the granularity of information and status needed. So let's say the tool is web-based, where each team member has an account and is part of a project. On a daily basis, each participant updates the progress status of the activity he/she is involved in, and managers/leads responsible for project milestones update that information when milestones are met. The tool should provide:

Categorizing activities/tasks: Each activity an individual is working on has to fall under a certain category, and there has to be a hierarchy of such activities providing various levels of view for management and team members alike. Imagine a software project. There can be various top-level categorizations of a project's progress, like Product/Feature Conceptualization, Product Design, Implementation, Testing, Deployment, Customer Support etc. Let's call this Level 1 categorization. Each of these can be drilled down into still more granular categories; Product Conceptualization, for instance, might divide into Market Research Findings, Prototyping, Feasibility Analysis etc. This will be Level 2. And we can have n levels, further dividing the activities for more precision. Now each individual, say a software engineer, would update the status of an activity daily. Say he/she is assigned a certain module to develop or certain bugs to fix; he/she does that and updates the system. Based on these individual updates, the system can give a holistic picture of what's going on in the project.

Now in the above chart (assume an incremental development methodology), top management and members see the progress in each area of the project and can drill down to get details in each category. The same chart can have milestones marked out separately for each category and granularity. Say the development sub-team has a milestone to release beta version 0.8 of a product, and testing has a milestone of load and functional testing the same. With the changing nature of the industry, more and more beta releases are hitting the market, not to mention beta releases to internal customers within the organization, if any. This might give a good view of things.

Recording everything: The biggest advantage that wikis and blogging provide is archiving and easy access to data. The best use of this would be to record everything, like, say, the design document of a component being developed. This document can be entered into the system on a particular date, appropriately linked as a milestone or task deliverable under a particular category.
Now a person from any team needing that document can hunt for it in the specific stream (category: Design) at any level of granularity, and can further narrow the search by its time range of publishing or a specific keyword. It can also be used for tracking dependent/related activities (as explained in the next point). But textual data is not the only input that should be captured. Let's face it, it's the informal discussions and sometimes client [...]
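To make the wishlist concrete, here is a minimal sketch of how the hierarchical categorization and daily status updates could be modeled. All names and fields below are hypothetical illustrations of the concept, not any existing tool's API:

```typescript
// Hypothetical data model for the team-collaboration wishlist above.
interface Category {
  name: string;         // e.g. "Implementation" (Level 1) or "Prototyping" (Level 2)
  children: Category[]; // arbitrary depth: Level 1, Level 2, ... Level n
}

interface StatusUpdate {
  author: string;          // team member posting the daily update
  categoryPath: string[];  // e.g. ["Product Conceptualization", "Prototyping"]
  date: Date;
  progressPercent: number; // 0..100 for the reported activity
  isMilestone: boolean;    // leads mark milestone completions the same way
  note: string;
}

// Roll up progress at any level of granularity: average every update whose
// category path starts with the given prefix. This is the "drill-down" view.
function rollUp(updates: StatusUpdate[], prefix: string[]): number {
  const matching = updates.filter(u =>
    prefix.every((seg, i) => u.categoryPath[i] === seg));
  if (matching.length === 0) return 0;
  return matching.reduce((s, u) => s + u.progressPercent, 0) / matching.length;
}

const updates: StatusUpdate[] = [
  { author: "dev1", categoryPath: ["Implementation", "Module A"], date: new Date(),
    progressPercent: 60, isMilestone: false, note: "core logic done" },
  { author: "qa1", categoryPath: ["Testing", "Functional"], date: new Date(),
    progressPercent: 20, isMilestone: false, note: "test plan drafted" },
];
console.log(rollUp(updates, ["Implementation"])); // 60: one stream's view
console.log(rollUp(updates, []));                 // 40: whole-project view
```

The same prefix-matching trick gives both the top-management rollup and the per-category drill-down from one set of daily updates.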



Blurring line between Legal and Pirated

Sat, 18 Feb 2006 04:33:05 -0500



Recently I came across audio versions of popular books ('Audio Books') on the net. Having tried a couple out, I can surely tell you the experience was simply amazing. Imagine this: it is the author who wrote the book narrating passages from it to you, with amazing clarity and at a joyous pace. It felt like attending one of his/her seminars (which it might well be). All this for the price of net access! Digital piracy is blurring the line between legal and illegal. It seems it just cannot be controlled with the present tools.

Some time ago, I wrote on how the print medium isn't going to go away anytime soon. But what I considered in that article was digital documents (.pdf, .doc). Audio versions of the same are a great replacement. Consider someone reading you the contents of the newspaper: you select the news on a handheld device and next you hear it (perhaps along with the digital document too). I just love text-to-speech tools and have been longing for their entry into cell phones. But the 'Audio Books' I heard outclassed even them, because it was definitely a person speaking, one who wouldn't make even the remotest mistake (common with text-to-speech tools) and who adds extra human emotion to the speech, making it come alive (a la seminar style).

DivX files are shared casually around, and their quality matches that of DVDs, let alone VCDs. MP3s are available for every soundtrack ever produced and are sold at drop-dead prices on local streets. Even television programmes are easily available, completing the media entertainment spectrum.

None of the things I mentioned has an official channel of sale in India (there are some remote services, but they are not aggressively priced). You have to go the piracy way if you want the best of new technology entertainment at your desktop/palmtop. It seems the media industry now has to invest heavily to provide complete entertainment suites (including high-quality video streaming, MP3s or equivalent soundtracks, Audio Books, recorded television programmes) to everyone's home, and that too at dirt-cheap prices. If not, the broadband revolution will signal the end of the audio, video and print entertainment industries.




Towards Meta-Programming

Sat, 4 Feb 2006 06:11:27 -0500



Recently I caught up on the new features of Ruby on Rails and Echo2. Amazed by the level of thought that has gone into creating these kinds of development platforms (yes, let me call them that rather than programming languages), I find it inevitable that we will soon give up hardcore programming for redundant day-to-day tasks/features in applications. The move towards a more abstract layer, which Ruby/Rails terms 'Meta-Programming', is inevitable. We hear more about Rails than Echo2, I think, because Ruby is open source and free. Of course Ruby/Rails has a lot more features than Echo2, but the concept of Echo2 is indirectly Meta-Programming on a smaller scale too. Ultimately it is industry acceptance that drives one technology over another, and open-source sites with their large mouths shout their products louder to push aside the similar products/concepts around.

I am not saying that Ruby is the future; it is Meta-Programming that is going to be the major change the development community is going to witness. Whenever Microsoft and Sun move over to support Meta-Programming in their languages, the whole community will move to this new level of development abstraction, as the underlying redundant tasks are taken care of by the frameworks rather than the application code. In fact, I strongly believe Microsoft initiated this Meta-Programming principle a long time ago with Visual Basic!

The change to the next level of abstraction in programming/software development is nothing new. We have already done it from machine language to assembly languages to high-level procedural languages to object-oriented languages. Climbing each level, we let go of some control over the functioning of the system; Java, for instance, allows developers less control than C, but the ease of development is worth the switch. The real innovation that Meta-Programming brought along is the thought of the next level! Most of the time, people get so used to something that they can't think of anything beyond it or anything better (I would be part of this group a lot of the time too). If you want to find out where you belong, try this: what's beyond Google? :) . So Meta-Programming addressed the question: what is the next level after OO? Even though Microsoft started it with VB, Ruby/Rails has certainly extended the concept to a much broader scale, targeting the industry's currently most painful area, web application development, which involves a lot of redundant work!
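To make the idea concrete, here is a small sketch of the dynamic-method style Rails popularized, done with a JavaScript/TypeScript Proxy. This is my own illustration of meta-programming, not actual Rails or Echo2 code: finders like findByCity are never written by hand; they are synthesized from the method name at call time.

```typescript
// A tiny imitation of Rails-style dynamic finders using a Proxy.
// Illustrative only: behavior is generated from names, not hand-written.
type Row = Record<string, unknown>;

function table(rows: Row[]): Record<string, (value: unknown) => Row[]> {
  return new Proxy({} as Record<string, (value: unknown) => Row[]>, {
    get(_target, prop) {
      if (typeof prop !== "string") return undefined;
      const match = /^findBy(\w+)$/.exec(prop);
      if (!match) return undefined;
      // Derive the column name from the method name: findByCity -> city.
      const field = match[1][0].toLowerCase() + match[1].slice(1);
      return (value: unknown) => rows.filter(r => r[field] === value);
    },
  });
}

const users = table([
  { name: "Asha", city: "Mumbai" },
  { name: "Ravi", city: "Pune" },
]);
console.log(users.findByCity("Mumbai")); // [{ name: "Asha", city: "Mumbai" }]
```

The point is not the Proxy itself but the shift it represents: the framework absorbs the redundant plumbing, and application code shrinks to intent.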

Again, there was a phase in between when people targeted some redundant tasks in database schema development, GUI development and software design-to-implementation mapping. Several products were introduced here, like Visio, Rational and the like, which promised reverse engineering, re-engineering and design-to-implementation transformations/materialization of sorts. However, the limitations imposed were much sharper: there was no support for later modifications or easy code additions after the transformation such that the original design remained referable and navigable. There was not even support to bypass the constraints and get your hands dirty by descending a level in case of dire need (the way Java provides JNI to use native code). But their failures were crucial in realizing what would and wouldn't work when we try to go to a higher abstraction layer.

I am pretty excited about these developments. Ruby/Rails definitely has the first mover's advantage. I don't think Echo2 will quite make it without the backing of a strong brand like Microsoft/Sun. But surely MS, Sun and many others will catch this bus, and that will decide how the industry achieves stability/unanimity on Meta-Programming development.




Four Behavioral States of Individuals

Sat, 5 Nov 2005 05:35:10 -0500


Another one from the Thought Leader Forum. Norman L. Johnson presents research done by two researchers on consumer behavior in different product segments, giving rise to different forms of market structure: oligopoly, intense competition, product diversity, market share volatility etc. A very good read:

The box is a two-by-two matrix that shows four behavior styles, or states that a single individual may take. This matrix is not easy to understand. The horizontal axis deals with cognitive processing and the vertical axis deals with social processing. Along the cognitive processing axis, if your needs are satisfied, you don't think. You just keep going along as you have been. If your needs are dissatisfied, you may have to think a lot. For example, if you have a cup of coffee, your need for coffee is satisfied and you don't think about where you want to go to get a cup. If you don't have a cup of coffee and you want one, all of a sudden, you start going through scenarios about where to go to get it, what kind, when, and so on. Along the social processing axis, you're either certain about the future or uncertain about it. If you're uncertain, you tend to do more social processing: you tend to watch and rely more upon others. If you're certain, you tend to do more individual processing.

These two researchers did a small world model with 1000 consumers, each with the same behavioral propensities. In the first scenario, all of the consumers are Repeaters. The Repeater is satisfied and certain about the future. That scenario moves to a situation where there are few products with equal distribution, and the whole system is highly stable. In the second scenario, all consumers are Imitators. The Imitator is satisfied but uncertain about the future. In this situation, there are few products in the market, their distribution is unequal, and the system is highly stable. Because Imitators are socially active, the system performance converges on the stable state much faster than it does when there are all Repeaters. In the third scenario, all consumers are Deliberators. Deliberators are dissatisfied but certain about the future. There's some need that's not being met. This model is the closest to the rational agent model used in economics. The result is high volatility on all products. A lot of the economic models assume this. This may be why we see a lot of chaotic behavior. In the last scenario, the consumers are all Comparers. Comparers are dissatisfied and uncertain. They're both social and rational. Because of the social nature, the cycles are longer, but there is still a lot of volatility over a few products.

In summary, a Repeater system is highly stable with low diversity. An Imitator system is highly stable with moderate diversity. A Deliberator system shows short-term volatility on many products. A Comparer system shows long-term volatility on a few products. Individual behavior yields interesting global results. What we're missing in the model is the change in behavior due to feedback. How do people change from being Repeaters to Comparers, for example?
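The scenario experiment is easy to re-create as a toy. The sketch below is my own reconstruction of the setup as described; the update rules per style are simplifying assumptions, not the researchers' actual model:

```typescript
// Toy version of the four behavioral states: each of 1000 consumers either
// repeats, imitates the crowd, deliberates rationally, or does both (compares).
type Style = "Repeater" | "Imitator" | "Deliberator" | "Comparer";

const argmax = (xs: number[]) => xs.indexOf(Math.max(...xs));

function simulate(style: Style, consumers = 1000, products = 5, steps = 50): number[] {
  const choice = Array.from({ length: consumers }, () => Math.floor(Math.random() * products));
  for (let t = 0; t < steps; t++) {
    const counts = new Array(products).fill(0);
    for (const c of choice) counts[c]++;
    // Dissatisfied agents re-evaluate product utilities every step (volatility).
    const utility = Array.from({ length: products }, () => Math.random());
    for (let i = 0; i < consumers; i++) {
      if (style === "Repeater") break;                          // satisfied & certain: never switch
      if (style === "Imitator") choice[i] = argmax(counts);     // uncertain: follow the crowd
      if (style === "Deliberator") choice[i] = argmax(utility); // dissatisfied: pick best utility
      if (style === "Comparer")                                 // dissatisfied & uncertain: blend both
        choice[i] = argmax(utility.map((u, p) => u + counts[p] / consumers));
    }
  }
  const share = new Array(products).fill(0);
  for (const c of choice) share[c]++;
  return share; // final market share per product
}

console.log(simulate("Imitator")); // e.g. one product captures nearly everyone
```

Running each style gives the flavor of the reported results: Repeaters freeze the initial shares, Imitators converge fast onto a few products, and the dissatisfied styles keep churning.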





John Seely Brown's Predictions

Sun, 9 Oct 2005 11:04:06 -0400

John Seely Brown is the chief scientist at Xerox Corporation. In the Thought Leader Forum of the year 2000, he made some amazing predictions that have come true even in terms of the timeline he calculated. I am highly impressed with his vision of the future convergence of various scientific disciplines and crafts, which he terms 'Judo'. Following are some of his important predictions from the same article:

The confluence of three laws drives the new digital power and underlies the exponential pace of change. The first law is the law of communication, or the Fiber Law. This states that the capacity of fiber networks is doubling every nine or twelve months. The second law is Moore's Law, which states that the capacity of computational systems per dollar doubles every 18 months. The third law, which replaces Metcalfe's Law, is called the Law of Community. Metcalfe's Law states that the value of a network is the square of the number of members of that network. [Metcalfe's Law], however, was created in a world of local area networks. If we move to wide area networks such as the Internet, you are then looking at systems that support virtual communities. So, out of n people, how many virtual communities can you create? You can actually create 2^n communities, versus the n^2 number of relationships between n individuals. And if you take into account the web crawlers interconnecting these communities, you create a 2^(n^2) relationship. This is a number astronomically larger than Metcalfe's Law predicts. I believe it is this law that is driving the explosive increase of traffic on the Internet backbone today. If you look at the number of communities, not the number of people, and recognize that they are also talking to each other through web crawlers and intelligent agents, then you begin to get a sense of the invisible dynamic that is driving unprecedented demand on the Internet.

We are learning how to bring bits and atoms together to create smart systems, systems that will experience the same cost curves as Moore's Law. But also think about what our world would be like if cars and planes actually followed Moore's first law. Nice metaphor, of course; but that is all it is. Well, what I am suggesting here is that the metaphor may turn into a reality when bits and atoms start to merge. If so, we are about to enter a new era affecting every material aspect of life.

The Internet will pass through three phases. The first phase is the one we are all familiar with: a network of networks with computers talking to each other. Slightly more interesting is the second phase, in which the Internet emerges as a medium. Over the next five or ten years, this will radically change what we conceive of as entertainment. [Kshitij: This, mind you, has come true in the exact duration specified; this is 2005.] In the next ten years, we will see the Internet emerge as a self-aware fabric that is always in the background of our lives. Wireless will have a tremendous impact on not only the last 100 feet of the media phase, but also the last yard or the last inch of the fabric phase, where thousands or perhaps even millions of little devices get tied together without tethering them with cables.

As we move forward, we will need to shift systematically away from computer science metaphors to biological metaphors. A real challenge will be designing a computational immune system into these immense and complex networks. We must accept that these networks will not work perfectly all the time. Viruses are here to stay.
As we build computational infrastructures that serve as the lifeblood of the world, we must design them to be as robust as our own biological systems. This will require something equivalent to a computational immune system that can detect a virus and instantly attack it. The next major frontier for us is symbiotic computing. The first generation of computing was personal computing, followed by social computi[...]
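The community-counting arithmetic in the Law of Community is striking even for tiny groups; a quick illustrative check:

```typescript
// Compare Metcalfe's n^2 pairwise relationships with the Law of Community's
// 2^n potential communities (every subset of n people can form one).
for (const n of [5, 10, 20, 30]) {
  console.log(`n=${n}: n^2=${n ** 2}, 2^n=${2 ** n}`);
}
// Already at n=30: 900 pairwise links versus over a billion possible communities.
```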



Moore's GAP & CAP

Sat, 8 Oct 2005 14:50:44 -0400


Geoffrey Moore, the creator of the Technology Adoption model, highlights how investors have to weigh companies based on their competitive advantage. He indicates there are two views of this advantage:

1. GAP - Competitive Advantage Gap
This is the differentiation of a company's offering from its competitors. Increased differentiation increases the probability of making the sale and allows companies to charge a premium for their products.

2. CAP - Competitive Advantage Period
This is the competitive advantage maintained over time. The longer a firm can sustain the GAP, the longer investors can forecast privileged earnings for the company.

Technology monopolies tend to have long CAPs, and this is reflected in the market capitalization of powerhouses like Microsoft and Intel. Investors believe that these companies will have a virtually unassailable position in the market for a very long time. Conversely, most equipment manufacturers have a very high GAP because of their innovation, but they have a very low CAP because their competition copies their innovations within a year or two.

To gain that competitive advantage, one might opt for the following layers/strategies:

  1. The first strategy is just to refresh your product or service offering. This is a sufficient solution if you are still in the right market with the right brand and distribution channels, as well as a solid company.
  2. The next level of challenge is in execution. If your differentiated offers are highly variable in quality, then focus on value disciplines. You may need to focus on operational excellence, product leadership or customer intimacy.
  3. The third strategy is to strive for dominance in a market segment. This positions a company in such a way that even if its execution flagged, customers would still buy from it. The better a company is wired into its customers, the more information it can gather from them about their future needs, and the more the firm can stretch its CAP.
  4. The fourth strategy is for a company to design itself into the fundamental value chain of its industry as a category leader. If you can position yourself as Microsoft, then the entire industry will reinforce your leadership because it will not want to support a second operating system.
  5. The final card that trumps all of the strategies described above is catching a new technology wave. A new wave of technologies can make all of the competition's brand and execution and industry power irrelevant.
In mature markets, you see only half of this model because the lower levels do not often shift from year to year. Mature markets, like the automobile industry, see very little change. Market disruptions start at the bottom and reshape the entire market. The deeper in the market the shift happens, the bigger the "earthquake."






5 Myths of the Digital Economy

Sat, 8 Oct 2005 14:29:39 -0400


Thought Leader W. Brian Arthur defines 5 myths of the Digital World:

Myth #1: All networks are subject to network effects.
Myth #2: There is a new economy defined by the latest technology.
Myth #3: All high tech resides locally.
Myth #4: Knowledge is easily transferred anywhere in the world.
Myth #5: The current political structure of the world will last indefinitely.





The trouble with Mumbai.

Fri, 9 Sep 2005 14:20:24 -0400


I don't hate Mumbai at all, but the state government seriously needs to do something for its residents. The flooding during the rains, the inefficient disaster management programs, the infrastructural doom: these are all almost clichés to Mumbaiites. It seems the city is all about privately set-up malls, with politicians always aiming to make it one of the Shanghais and Singapores of the world. I think that's the problem. Perhaps they are aiming too high. Perhaps we don't need flashy trains after 5-6 years; we just need MORE trains NOW. We don't need another airport after 10 years; we need easier access to the current one. We don't need another shopping mall; we need a sports complex. Disasters do happen, and even the largest economy in the world may have problems coping with them, but this can't be used as an excuse to shy away from the obvious lack of planning. We need some sensible planning, and that too VERY SOON! Without it, I see Mumbai's glamour being stolen and distributed amongst the smaller cities within and outside India.





Print Media won't be out of fashion anytime soon!

Fri, 9 Sep 2005 13:51:08 -0400


Online blogs and news sites have definitely made many netizens hooked on them for their daily feeds. I think there's hardly any information you can get from the print medium that you can't get online. However, the print medium still has good advantages which its online counterparts cannot currently match.
  1. The leisure of carrying it anywhere: Even if I do get a lot of info on my cell via WAP, it of course hasn't reached print's level of convenience of being carried almost anywhere. The print medium will continue to be THE CHOICE when it comes to reading while traveling, or at any place other than home and the office.
  2. Less straining to the eye: As most professionals work with a computer all day, reading 'offline', so to say, is very inviting. I can't imagine reading a book online, or anything lengthy enough to strain the eyes.
  3. The Digital Divide: Of course, people without access to a computer have no choice other than the print version. It is, however, the standards and formats that newspapers have established that many of the older generations refuse to shift from.
  4. Language barrier: Try to get content in a language other than English and you will hail print. European languages and even the East Asian countries have solved this to some extent. However, for a country as vast as India, even the national language is not catered to strongly. I must admit, though, that this need for content in anything other than English is decreasing day by day, as the younger generation knows what will fetch them the outsourcing jobs.

Overcoming these will probably require a device that solves problem (2) for the young professionals and problem (3) for the older generations. Devices like Sony's Librie, which uses E-Ink technology, are a good attempt at it. However, these devices have to converge on a standardized format (like the MP3), and multiple devices supporting it will follow, the competition of course benefiting users with reasonable pricing and quality. It doesn't seem to be happening anytime soon, though (everything online is just taken for granted somehow), and so the print media can rest at ease.





Where ART meets application

Sat, 3 Sep 2005 15:16:52 -0400



deviantART is a wonderful site if you are a fan of good photography. The site, I feel, promises something for everyone's taste. In a world of digital photography, almost anyone can put his creative talents to the test to pull out extraordinary images. The site is specifically about ART, so one can find even sketches and the like. I frequent it just for the thirst of finding out how a few people see beauty in the simplest things and capture it for a lifetime. As a personal recommendation, do check out the Architecture and DarkRoom sections.




My new blogging guidelines

Wed, 31 Aug 2005 13:53:16 -0400



I have decided to re-style this blog around a set of new 'guidelines'. Many of the points are inspired by various other blogs I like to read, and some seem to me quite intuitive and much needed. So here goes:
  1. Bulleted points: Writing things down in bulleted points helps a lot while reading. If you want to read specific points, it helps you identify them fast. If you want to skip one, you know where to resume from. It brings order to the writing, I feel.
  2. Write short: I read on a website that writing long articles is out of fashion. If you can't get the point across in fewer words, you had better split the article. Large blocks of text scare people; they will set them aside for later, which eventually never happens.
  3. Diversify the focus: Many times, if you concentrate too much on what can be part of the blog, you miss some good thoughts along the way in parallel areas of your interest or concern. Venturing into diverse topics gives a refreshing feel to the weblog, as well as to the blogger, who can try his hand at things he has an interest in. For me, it's going to be things other than techie stuff; I did start on that earlier, but now it will be more refined.
  4. Links-only, separately: 'Goodies', as I will call them, are links pointing to interesting things to read, watch or hear on the web. If I don't have much of a comment on one but really want to keep a link to it, it will get a separate short post with just that. Got this idea from the Corante site (which terms it a 'blink').
  5. No schedules, no timelines: There isn't going to be a daily update, nor will I restrict myself to one post when I am in the mood or have the material. It will be when I want, and as much as I want. This will surely get the best out of me, and this blog won't be routine work but a refreshing change, all the time.



If you don't run out of things to read, Lucky You

Wed, 31 Aug 2005 13:31:15 -0400


After more than two months of inactivity, I feel sad that I have not been able to put in time for this weblog. It's a pity that I just don't seem to find the time to update it these days, with my job eating up most of it. However, with renewed interest, some reignited energy and new ideas to reshape this blog and put it back on the road, I continue my journey of penning my experiences and thoughts, for myself and others.

In spite of the lack of time to write, I have dedicated a lot of time to reading. And I strongly believe that if you don't run out of things to read, you are a really lucky person. A whole lot of knowledge exists in the pool of data which is the web. Filtering it to suit one's interests is a tough task, and if you are piling up things that interest you, then Lucky You.

I realized this a few days back when I virtually exhausted my reading set and had the frightening thought that it was in fact me who could not find more. Oh well, with that thought, I am hoping to put things back on track for this blog of mine.




Apple trend comes alive!

Sat, 18 Jun 2005 10:56:18 -0400

Another trend I anticipated, Apple's Mac OS being ported to x86 hardware, comes alive.




Size affects productivity, creativity!

Sat, 18 Jun 2005 10:48:56 -0400

From VC blog:

I think Google has become so mainstream and so ubiquitous in our everyday Internet lives that it's lost its mojo in some ways. That doesn't mean it won't continue to be hugely relevant, hugely profitable, and hugely important. But it does mean that there's a vacuum that can get filled by others who are small, innovative, new, and exciting.

Google has recently launched some very attractive web services like Google Local and Google Maps. Their SMS service is a killer app for cell phones. It seems like they are launching a new web service every week. It's so fast and furious that it is making my head spin. But I don't understand how all of these new web services have anything to do with their core business of targeting advertising via search and contextual advertising. Do these services create more inventory for them to sell? Do they generate more data that allows Google to increase the relevance of the advertising? In some cases, like Local and Maps, I see the logic. In many other cases, it just seems like a laboratory turning out cool stuff and seeing what sticks.

And while they crank out more and more new stuff, their two core products, Search and Adsense, seem to be suffering from a lack of innovation. Adsense doesn't perform very well for publishers. So much so that many publishers are turning back to banners. And Google is also turning to banners. It's back to the future. That's not innovation.

But size is the enemy of efficiency and innovation. And Google has become a very big company very quickly. They are in Starbucks and McDonald's company now. That's great for them, but it's also great news for the little guy like Joe who can make a better cup of coffee or a better web service.

I agree in totality. Size affects decision making, which affects innovation, which negatively impacts the driving force of interest and achievement in the company, leading to lower productivity. I have always rated innovative product companies higher than monotonous outsourcing giants. It is a matter of great interest that many smart companies continue to remain small in staff and yet grow exponentially at the same time.




Impulse Information

Sat, 18 Jun 2005 10:45:08 -0400

Picked this one up from EMERGIC again. Anita defines 'Impulse Information' as:

Impulse information is something that you need within a few seconds of thinking of it. If it takes too much time, then your addiction and impulse wears off. You want to find that one thing. You want to find it fast. You want to find it now. You know what it is you are craving. The challenge is just to get it quickly. You don't want to browse through a lot of pages. You don't want to sift through irrelevant content. You don't want to be bogged down by massive hierarchical structures. You want something flat, quick, and satisfying.

She has defined it in the context of web search in particular, but it is more and more relevant for all modern applications as well. Sometimes a user has seen some information and wants a link to something interesting about it which he knows is present in the same system. Detecting this trend, and accepting and processing random, ad-hoc user queries, is a top priority these days.




SaaS: Good and Bad applications

Sat, 18 Jun 2005 10:44:07 -0400

Amy Wohl comes up with her list of good and bad SaaS implementation ideas:

Good:
1. Net native applications, written to be delivered from a shared server, across the web, to a diverse population of customers who will be able to administer their own accounts.
2. Applications for which there is no differentiating value to your organization.
3. Applications which you need only occasionally (or which only a few of your employees use regularly) but which are expensive to install and support.

Bad:
1. The application is mission critical so that your IT department and your senior executives are nervous, very nervous, about letting it (or its data) be anywhere outside of their complete control.
2. Your application requires a great deal of customization.
3. Your favorite application was never designed to be run in a multi-user environment and forcing it to do so makes it very expensive or very slow.

I think point 3 from the Good list will be the driving force for near-future SaaS applications. It will be like Sun's recent effort of selling grid computing for $1 per hour. Although it was not exactly 'software' that was sold, such a setup would otherwise be far beyond the reach of small enterprises.

I disagree, however, on point 2 of the Bad list. I think customization can be provided; it is just a question of how the system is designed. It should be pluggable: one solution cannot fit all, so applications delivered as a service will form part of the solution, not the total solution. The selection of services, in turn, makes the whole highly customizable.
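A sketch of what I mean by pluggable (the interfaces and names are hypothetical, purely to illustrate the design): the hosted core stays generic, and each tenant customizes by selecting and composing plug-ins rather than forking the application.

```typescript
// Hypothetical plug-in seam for a SaaS app: the hosted core never changes;
// customization happens by composing tenant-specific plug-ins.
interface InvoicePlugin {
  name: string;
  apply(total: number): number; // each plug-in may adjust an invoice total
}

const regionalTax: InvoicePlugin = { name: "regional-tax", apply: t => t * 1.18 };
const loyaltyDiscount: InvoicePlugin = { name: "loyalty", apply: t => t * 0.95 };

function invoiceTotal(base: number, tenantPlugins: InvoicePlugin[]): number {
  // The core just folds whatever plug-ins the tenant has selected.
  return tenantPlugins.reduce((t, p) => p.apply(t), base);
}

console.log(invoiceTotal(1000, [regionalTax, loyaltyDiscount])); // 1121
```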




XML unifies all

Sat, 18 Jun 2005 10:43:29 -0400

Office documents to go XML: that will be the new default format in which MS Office will save. This will surely make them highly searchable. Should PDF follow suit? Should there be one general format that desktop search tools can optimize on to get information? Well, it seems XML will greatly simplify that.

But I guess it will once again be raw data splashed about, because it will be something like specifying that this data belongs to the same paragraph, or that this text is bold, in this font, on this background; something like HTML? Annotation can really help in searching for relevant information, and XML can really help in annotating content, even images. If used effectively, XML can very well be what the world defines as information: data with a context.
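A toy illustration of annotation-driven search (my own made-up markup, not any Office or PDF schema; runs in a browser): once elements carry semantic attributes, a query can target meaning instead of raw strings.

```typescript
// Toy example: semantic annotations make search target meaning, not text.
const doc = `
  <doc>
    <figure subject="sunset" location="Mumbai">beach.jpg</figure>
    <para topic="travel">We walked along Marine Drive at dusk.</para>
  </doc>`;

const xml = new DOMParser().parseFromString(doc, "application/xml");
// Find every element annotated with a given subject, regardless of tag type.
const hits = [...xml.querySelectorAll("[subject='sunset']")];
console.log(hits.map(h => h.textContent?.trim())); // ["beach.jpg"]
```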




Bluetooth stayed.... surprisingly?

Sat, 18 Jun 2005 10:42:47 -0400

Bluetooth is a technology that was hyped quite a lot in the early days after its introduction, but it is only now that companies are coming up with products using it, and surprisingly the rate is increasing even with WiFi and (in the near future) WiMax in the picture. Bluetooth was recently used for connecting mobile phones to landlines whenever such a network is available, as at home or the office. Given Bluetooth's limited range, it is surprising that cell phone manufacturers just want to stick with it more and more.



Innovation in Maturing Markets

Sat, 28 May 2005 06:58:24 -0400

With market maturity, innovation takes a backseat. Philip Lay, at the recent 'Software 2005' conference, brought forward some excellent insights on the roles and types of innovation in the pre and post Mature Main Street phases of the TALC.

Disruptive Innovation kicks off the first phase of the TALC, the early adopters. An example of this is VoIP; I see even RFID and RSS fitting in this category. Next comes Application Innovation in the Bowling Alley phase. These are immediate, raw, low-level services offered using the disruptive technologies. Examples include SMS, and I feel the current WAP sites, weblogs and even the Google APIs come under this. In the Tornado, the phase where things get rolling really fast, we see Product Innovations, and that is exactly where current consumer electronics and embedded systems lie. Examples include the iPod, TiVo, and maybe even the Yahoo services under the common ID mechanism. I doubt whether Google comes under this or not, because it is basically just search till now. Platform Innovation is what Software 2005's industry visionaries and leaders think software technology as a whole has reached, and it is prevalent, as Philip says, in the early Main Street phase. Examples include relational databases, citing Oracle among them. I think this is surely where people believe Google is headed: its own Web OS platform, as yet unproven. Even the J2EE and .NET application development frameworks would fall under this, and embedded systems would reach it, hopefully soon. That was pre Main Street.

In the mature markets, Philip cites two broad fronts across which innovations divide themselves. The first is the Customer Intimacy phase, and it includes:

  • Line Extension Innovation, where the product is presented in different flavors for different needs. Examples he lists include inkjet printers (HP) and servers (Sun). However, I feel Sun really deserves a lot of credit for the plethora of features in its version 10 offering, certainly more than a line extension.
  • Enhancement Innovation, where the product is given minor enhancements, as in the case of mainframes (IBM) and laptops (Sony), as noted by him.
  • Marketing Innovation, needless to say, where the product is just given a feel-good factor by building extra facilities around it. Examples include dedicated storefronts (Apple).
  • Experiential Innovation, as in the case of executive dashboards (Cognos) and mediated Internet (AOL), is just an experimental addition to the product, perhaps without real market insight, or to test out the market.

The other category of innovations belongs to the Operational Excellence zone. These are enhancements in the routine operational usage of the system and include:

  • Value Engineering Innovation, e.g. storage ATA RAID (Nexsan). These fulfill certain non-functional requirements like security, scalability and failover.
  • Process Innovation, where the workflow is enhanced as per changing trends and market experiences, e.g. online retail checkout (Amazon).
  • Integration Innovation, perhaps the most challenging and the buzzword these days. It includes integrating your software with the systems already in place, to enable transparent portability for the enterprise onto your application. It also includes making your application a plug-in for existing platforms, and allowing other applications to plug into yours via generic APIs. Examples include ERP/SCM/CRM systems (SAP) and semiconductor chips (Intel). This perhaps comes n[...]



'Platformification' of Embedded Systems

Sat, 28 May 2005 06:55:04 -0400

At the recent 'Software 2005' conference, the focus was on understanding the changing software ecosystem in the US, where software technology in general has surely reached the 'Mature Main Street' phase of the TALC model. The focus is now on developing platforms, turning disruptive innovations into broader platforms to reap the benefits of convergence. 'Platformification' of the embedded systems marketplace (PDA/mobile, industrial automation, auto etc.) seems to be the next target, and I think that really makes a lot of sense. A platform will create a level playing field, with business application makers actually having a set standard to work on for catering to a wide audience. Along with the central platform providers, the ecosystem around it will include players like the OEMs, operating system makers and chip manufacturers.




Moore's Core and Concept ideas

Sat, 28 May 2005 06:53:27 -0400

Geoffrey Moore, I feel, is the kind of person who builds on his own model and established standards, like Chris Anderson does with the Long Tail. His 'Technology Adoption Lifecycle' (TALC) model is well established now, and in his recent presentation at the OSBC he mapped it to open-source products, putting up the question of where they stand on it and concluding that it is perhaps the 'Tornado' phase.

 

In the same presentation, he brought up interesting terms in Product development phase, naming them as "Core" and "Concept".

 

He represents Core as:

Any process that contributes directly to sustainable differentiation leading to competitive advantage in target markets.

 

and Concept as:

All other processes required to fulfill commitments to one or more stakeholders.

 

He reiterates that as markets mature, offers commoditize. So the Core turns into Concept for the company as it starts to search for newer Cores. I agree with him: in the case of Open Source, focusing on Concepts is of prime importance; let the users of the open systems define their Core and work on it, using the functionality and standards of the open systems as a base. Open Source works best for platform providers or library implementations for common use, and proprietary software is better at addressing core functional requirements.

 

I think Open Source will remain the standardized enterprise platform enabler (for applications like servers, browsers, databases) and a source of library implementations for a long time. It will not do as well for end products. It will continue to provide services and customizations to proprietary firms for the development of the final product for end users.




Long Tail vs. Bottom of Pyramid

Sat, 9 Apr 2005 11:07:23 -0400

Got this Chris Anderson piece via EMERGIC:

Is the bottom of the pyramid the Long Tail? The similarities are notable. Both theories are based on the notion that if you break the economic and physical bottlenecks of distribution you can reach a huge, previously neglected market. They both recognize that millions of small sales can, in aggregate, add up to big profits. And they're both focused on ways to lower the cost of providing goods and services so that you can offer them at a lower price point while still maintaining margins. But despite the fact that it took me a trip to India to clear my head on this, I think there is a key difference between them that makes them fundamentally incompatible.

The Bottom of the Pyramid (BOP) argument is essentially based on commodification. Take existing goods and services and make them an order of magnitude or two cheaper, either to buy or to make, but ideally both. Typically, this means reducing goods to their bare essentials and delivering them on a massive scale. This requires: 1) low price points; 2) minimal marginal costs (reduce consumables and packaging to the bare minimum); 3) "de-skilling" services so non-experts can deliver them; 4) the use of local entrepreneurs. The BOP model is focused on taking a single product or service and finding ways to make it cheap enough to offer to a larger, poorer market. This is why I think it's essentially about commodification.

The Long Tail, on the other hand, is about nicheification. Rather than finding ways to create an even lower lowest common denominator, the Long Tail is about finding economically efficient ways to capitalize on the infinite diversity of taste and demand that has heretofore been overshadowed by mass markets. The millions who find themselves in the tail in some aspect of their life (and that includes all of us) are no poorer than those in the head. Indeed, they are often drawn down the tail by their refined taste, in pursuit of qualities that are not afforded by one-size-fits-all. And they are often willing to pay a premium for those goods and services that suit them better. The Long Tail is, indeed, the very opposite of commodification.

So the Long Tail is made up of millions of niches. The Bottom of the Pyramid is made up of mass markets made even more mass. Both lower costs to reach more people, but they do so in different ways for different reasons. They're complementary forces, but fundamentally different in their approach and aims. The Mystery of the Apparently Similar Theories: Solved.

Solved, yes, it is. However, what one might immediately gather from this is that, as an entrepreneur, one should follow the Long Tail's "nicheification" rather than BOP's "commodification", since the Long Tail focuses on pure profitability, unlike the social-welfare-centered BOP. But if one follows closely, you will find that the two theories are not actually competing with each other. For developing or under-developed mass-market economies, Long Tail opportunities are quite few; those are rather a haven for BOP's commodification. The Long Tail is probably suited to the top-class developed economies, where it is feasible to quote a premium or equivalent price for goods not in peak demand and people are willing to pay it to satisfy their unique tastes and preferences. BOP theory, meanwhile, necessitates bulk sales from the[...]



The future of Digital Music

Sun, 27 Mar 2005 03:17:56 -0500

Corante has an interesting discussion going on, speculating about the trends expected in the digital media industry. Some key points I found worth pondering over:

  • More user-centric approach: The underlying operating paradigm in the music industry has been one of wanting complete and unfettered control (both of the artists and of the fans / 'users'); in fact, of often wanting control more than more revenues! The fact that the music biz continues to try and seize control is very disconcerting, and so, at this point, the industry is being dragged kicking and screaming into the digital age, which clearly is about giving control to the 'user', aka the customer. They should all take a page from eBay, Amazon, Southwest Airlines, TiVo and Netflix and empower the customers. They were used to looking at themselves as the ones in charge of their own kingdoms, and therefore by extension in charge of what their customers can or cannot do. With that type of attitude still lingering on, it is very hard for them to look under the hood and accept that their core business model and operating mode is being rapidly outmoded.
  • Subscription model: People are not going to repeatedly buy a massive amount of music using [the iTunes] model; ask anyone that owns one. On the other side of the equation, a model similar to Netflix (or Napster To Go), with people 'renting' music for a limited period of time rather than owning it, will be a model we'll see more of very soon. The artists and writers will make money by taking a percentage of the fees that are charged for renting access to their music. You establish a monthly subscription, you track what is rented, and you pay the content owners (including writer, publisher, artist and label) a pro-rata share of the amounts collected, based on actual use.
  • 'Long Tail' effects: The 'unpopular' (or lesser-known) titles earn a disproportionately large share of the total revenue; in fact, the aggregate revenues of all lesser-known titles are often larger than those derived from the top-rated and most popular titles, which means that even lesser-known titles stand a good chance of being monetized. Finally, you can make money by selling niche music to niche markets, because the hurdles of distribution are removed or at least lowered. In my view, the biggest and most lucrative potential is clearly in niche markets, such as channels that offer very specialized music: jazz channels, new age music, folk music, ethnic music, that sort of thing; global niche markets whose total populations will add up very nicely.
  • Complementary businesses: A music rental site could sell merchandise, concert tickets, fan clubs, special event access and other stuff around the core rental business.
  • Mobile platform, an opportunity: One interesting trend is starting to take place in Asia, particularly in China. At the end of 2004, China had 334 million cell phone users (that's close to the total size of the U.S. population!). Therefore, the potential for the cell phone to be the prime distribution pipeline for digital music in China is absolutely mind-boggling, because these consumers are very likely to use their cell phones to go onto the Internet (not a computer) in the not-too-distant future. Simply put: I think that the mobile music opportunity dwarfs the PC/Internet music opportunity; and they are converging. [...]
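The pro-rata payout in the subscription model above is simple arithmetic; a sketch with hypothetical numbers and field names of my own:

```typescript
// Hypothetical pro-rata royalty split for a music-rental subscription:
// each owner gets a share of the collected fees proportional to actual plays.
interface PlayCount { owner: string; plays: number; }

function proRataPayout(collected: number, usage: PlayCount[]): Map<string, number> {
  const totalPlays = usage.reduce((s, u) => s + u.plays, 0);
  return new Map(usage.map(u => [u.owner, collected * (u.plays / totalPlays)]));
}

// $10 monthly fee from one subscriber, split by what was actually rented.
console.log(proRataPayout(10, [
  { owner: "label-A", plays: 30 },
  { owner: "indie-artist-B", plays: 10 },
])); // Map { "label-A" => 7.5, "indie-artist-B" => 2.5 }
```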



Ajax? Probably just AX after some time.

Fri, 25 Mar 2005 10:16:46 -0500

Lately, Ajax has been in the news a lot after Google Suggest and Maps used it. It is basically a combination of JavaScript, HTTP and XML used to render controls smartly in web browsers (not the only thin clients, actually). Its usage tends to import the minimum amount of data from the server, quickly, so that the user gets the feel of everything happening almost at the pace of desktop applications in today's broadband world. XML.com and Adaptive Path have more on it:

One of the classic drawbacks to building a web application interface is that once a page has been downloaded to the client, the connection to the server is severed. Any attempt at a dynamic interface involves a full roundtrip of the whole page back to the server for a rebuild, a process which tends to make your web app feel inelegant and unresponsive. A solution to [this] problem presents itself in the form of the XMLHttpRequest object. This object, first implemented by Microsoft as an ActiveX object but now also available as a native object within both Mozilla and Apple's Safari browser, enables JavaScript to make HTTP requests to a remote server without the need to reload the page. In essence, HTTP requests can be made and responses received, completely in the background and without the user experiencing any visual interruptions. This is a tremendous boon, as it takes the developer a long way towards achieving the goals of both a responsive user interface and keeping all the important logic in the application layer. By using JavaScript to ferry input back to the server in real time, the logic can be performed on the server and the response returned for near-instant feedback.

Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:
  • standards-based presentation using XHTML and CSS;
  • dynamic display and interaction using the Document Object Model;
  • data interchange and manipulation using XML and XSLT;
  • asynchronous data retrieval using XMLHttpRequest;
  • and JavaScript binding everything together.

Instead of loading a webpage at the start of the session, the browser loads an Ajax engine, written in JavaScript and usually tucked away in a hidden frame. This engine is responsible for both rendering the interface the user sees and communicating with the server on the user's behalf. The Ajax engine allows the user's interaction with the application to happen asynchronously, independent of communication with the server.

Now the technology is great, and due credit to Microsoft for it. But it was only after Mozilla adopted it too that companies like Google took the effort to promote such a thing as a mainstream offering. (They did not want it to be browser-specific, especially to IE ;) ) No doubt Suggest and Maps are very well done, and the age of asynchronous communication is going to stay. However, there is one part of Ajax which perhaps won't continue for long, I think, and that's the 'j' part of it. Ajax is 'Asynchronous JavaScript And XML (or XmlHttp)'. Now, JavaScript is a scripting language long known for handling basic validation for applications and for building some 'dynamic', 'interactive' menus/controls in the interface. It is a scripting language with plenty of drawbacks, most notably its weak object-orientation scheme. JavaScript was not designe[...]
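For the record, the heart of the technique is only a few lines. A minimal browser-side sketch (the /suggest endpoint and element id are placeholders of my own, not any real service):

```typescript
// Minimal XMLHttpRequest usage: fetch data in the background and update the
// page without a full round-trip reload. "/suggest" is a placeholder endpoint.
function fetchSuggestions(prefix: string, render: (items: string[]) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", `/suggest?q=${encodeURIComponent(prefix)}`, true); // async
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      render(JSON.parse(xhr.responseText)); // near-instant feedback, no reload
    }
  };
  xhr.send();
}

// Wire it to a text box: every keystroke quietly asks the server for matches.
document.querySelector<HTMLInputElement>("#q")?.addEventListener("input", e =>
  fetchSuggestions((e.target as HTMLInputElement).value, items =>
    console.log(items)));
```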



Some search engines thought differently; when will the users?

Sun, 20 Mar 2005 08:22:13 -0500

I recently came across some innovative search engines that think out of the box, trying to grab a piece of the search engine market pie. First, the one which impressed me the most: KartOO. The best thing about it is of course the interface. It links results the way data warehousing/mining software does and shows virtual groupings among the results. Good if you are doing research on some topic and want to know about particular aspects of certain things. The only problem I faced while searching through KartOO was that my thinking was too "Googlish". KartOO displays the search text box too, but it additionally gives a map, in Flash, of how people think of or associate pages. So the question: when will users learn to adapt to the new innovations? It will surely take some time for the masses to think differently, and that's a big plus for the already established search engines.

Second in line are the "human" classifiers, like Furl and Topix, which impressed me the most. Furl has a USP which got me instantly hooked onto it: saving pages for later, and bookmarks which I can take along with me, i.e. accessible from anywhere. Topix too is good: articles intelligently classified into relevant categories, with some local filtering too, if desired. I think the Furl model is the best for initial user appeal. It offers something others don't, and through that, people start using its other services as visible on their screens ;) Those who question whether Furl is a search engine at all should probably think of its potential and not its current status. Its classification is done by humans and is therefore "currently" more intelligent than AI operations. Also, the classification is done by the people who use the service, not managed by people dedicated by the site itself, and it is therefore cost-effective and done in bulk.

Lastly, one more effort, under way for a long time, is answering questions rather than searching keywords, initiated by AskJeeves and followed by the likes of Brainboost. This effort, I personally feel, has FAILED. The results just look like a normal search conducted on keywords. The vast range of questions is left unanswered, and this is probably not how people will search in the future, even if it is quite logical that people think in questions.

 




Can Papers End the Free Ride Online?

Sun, 20 Mar 2005 08:21:29 -0500

NYT writes...

Consumers are willing to spend millions of dollars on the Web when it comes to music services like iTunes and gaming sites like Xbox Live. But when it comes to online news, they are happy to read it but loath to pay for it. Newspaper Web sites have been so popular that at some newspapers, including The New York Times, the number of people who read the paper online now surpasses the number who buy the print edition. This migration of readers is beginning to transform the newspaper industry. Advertising revenue from online sites is booming and, while it accounts for only 2 percent or 3 percent of most newspapers' overall revenues, it is the fastest-growing source of revenue. And newspaper executives are watching anxiously as the number of online readers grows while the number of print readers declines.

"For some publishers, it really sticks in the craw that they are giving away their content for free," said Colby Atwood, vice president of Borrell Associates Inc., a media research firm. The giveaway means less support for expensive news-gathering operations and the potential erosion of advertising revenue from the print side, which is much more profitable. As a result, nearly a decade after newspapers began building and showcasing their Web sites, one of the most vexing questions in newspaper economics endures: should publishers charge for Web news, knowing that they may drive readers away and into the arms of the competition?

Of the nation's 1,456 daily newspapers, only one national paper, The Wall Street Journal, which is published by Dow Jones & Company, and about 40 small dailies charge readers to use their Web sites. Other papers either charge for online access to portions of their content or offer online subscribers additional features. "A big part of the motivation for newspapers to charge for their online content is not the revenue it will generate, but the revenue it will save, by slowing the erosion of their print subscriptions," Mr. Atwood said. "We're in the midst of a long and painful transition."

Most big papers are watching and waiting as they study the patterns of online readers. Analysts said that the growth in readers was slowing but that readers appeared to be spending more time on the Web sites. "We're always looking at the issue," said Caroline Little, publisher of Washingtonpost.Newsweek Interactive, the online media subsidiary of The Washington Post Company. She said that the online registration process that most papers now require for use of their Web sites, while free, lays the groundwork for charging if papers decide to go that route. "You're getting information from your users and you can target ads to your users, which is more efficient for advertisers," she said. "This has been a dipping of the toe in the water."

This has been an observation, or rather a question, from my side too for a long time. How long can free be sustained? Can advertisement completely cover the online production, maintenance and distribution costs? Google has been surviving on the online text-ad revenue model for some time; is that model sustainable? Or is the age of subscription charges for online services like email, search and news about to dawn? Only time will tell, but it seems that [...]



Furl like that.

Sun, 20 Mar 2005 08:20:55 -0500

I recently found Furl.net from one of Jon Udell's articles referred by EMERGIC, and I instantly got hooked onto it. It is a (currently) free service that helps you organize your bookmarks online and even keeps a saved copy of each page. It provides browser plug-ins to quickly "furl" a page, copying all of its current contents (incl. ads) into the saved copy. The reason for this, of course, is to guard against dead links later. The concept is simple, but fabulously executed. It allows you to attach keywords and comments and to categorize the article. It allows you to publish your bookmarks so that others can refer to them (or not, if you wish so). You can get the "hot" furl additions of the day/week from the site, which are simply the pages people are increasingly "furling" for themselves. And of course a nice search interface makes it a complete online bookmarking service with portability (access from any browser on any machine), online storage (well, hope they are ready for the bulk loads) and community networking. It seems Furl has hit the jackpot and is definitely more user-friendly than del.icio.us. This fits in nicely into the next-generation search engine framework; let's see if the search giants adapt to this or buy it out. Furl certainly is here for the long term. On second thought, though, I feel there might be some issues creeping up, as some companies/governments might not like the idea of online availability of deleted/removed/banned content.
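To make the mechanics concrete, here is a rough sketch in Java of the kind of record a Furl-style service might keep per saved page. Every field name here is my own guess, not Furl's actual data model:

    import java.util.Date;
    import java.util.HashSet;
    import java.util.Set;

    // Purely illustrative: a guess at what a "furled" page record holds.
    class FurledPage {
        String url;                              // the original (possibly future-dead) link
        String savedCopyHtml;                    // full snapshot, ads and all, against dead links
        String comment;                          // the user's note
        Set<String> keywords = new HashSet<>();  // tags driving search
        String category;                         // user-chosen topic/folder
        boolean published;                       // share with others, or keep private
        Date furledAt = new Date();              // feeds the "hot" day/week rankings
    }

The saved snapshot is what separates this from a plain bookmark list: search and the "hot" rankings can run over stored copies rather than live pages.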




The theory of the Long Tail

Fri, 18 Mar 2005 13:21:00 -0500

Chris Anderson has impressed me a lot with his writings on the theory of the Long Tail. I think I will probably be mildly biased towards his theory in some of my future writings and thoughts. The theory portrays how the capacity and opportunities of the niche should not be ignored in the "hit-driven" economies of the world. He points out how almost everything can have a potential market and how customers will pay for products if given a "choice" to own them, use them, subscribe to them, etc. His writings suggest that the benefits of businesses targeting the Long Tail are now even more attractive, with current technology innovations removing the constraints of shelf space, stocking costs and peer references. More in his own words:

[It is] an entirely new economic model for the media and entertainment industries, one that is just beginning to show its power. Unlimited selection is revealing truths about what consumers want and how they want to get it in service after service, from DVDs at Netflix to music videos on Yahoo! Launch to songs in the iTunes Music Store and Rhapsody. People are going deep into the catalog, down the long, long list of available titles, far past what's available at Blockbuster Video, Tower Records, and Barnes & Noble. And the more they find, the more they like. As they wander further from the beaten path, they discover their taste is not as mainstream as they thought (or as they had been led to believe by marketing, a lack of alternatives, and a hit-driven culture). Most of us want more than just hits. Everyone's taste departs from the mainstream somewhere, and the more we explore alternatives, the more we're drawn to them. Unfortunately, in recent decades such alternatives have been pushed to the fringes by pumped-up marketing vehicles built to order by industries that desperately need them. Hit-driven economics is a creation of an age without enough room to carry everything for everybody. Not enough shelf space for all the CDs, DVDs, and games produced. Not enough screens to show all the available movies. Not enough channels to broadcast all the TV programs, not enough radio waves to play all the music created, and not enough hours in the day to squeeze everything out through either of those sets of slots. This is the world of scarcity. Now, with online distribution and retail, we are entering a world of abundance. And the differences are profound. With no shelf space to pay for and, in the case of purely digital services like iTunes, no manufacturing costs and hardly any distribution fees, a miss sold is just another sale, with the same margins as a hit. A hit and a miss are on equal economic footing, both just entries in a database called up on demand, both equally worthy of being carried. Suddenly, popularity no longer has a monopoly on profitability. The industry has a poor sense of what people want. Indeed, we have a poor sense of what we want. We assume, for instance, that there is little demand for the stuff that isn't carried by Wal-Mart and other major retailers; if people wanted it, surely it would be sold. The rest, the bottom 80 percent, must be subcommercial at best. To get a sense of our true taste, unfiltered by the economics of scarcity[...]
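A toy back-of-the-envelope in Java, with entirely made-up numbers, shows how an aggregated tail can rival the hits once carrying a title costs nothing extra:

    // Made-up numbers, purely to make the aggregation argument concrete.
    public class LongTailToy {
        public static void main(String[] args) {
            long price = 10;                     // same $ per title, hit or miss
            long hitUnits  = 1_000L   * 10_000;  // 1,000 hit titles x 10,000 sales each
            long tailUnits = 500_000L * 40;      // 500,000 niche titles x 40 sales each
            // With no shelf space to pay for, a miss sold is just another sale.
            System.out.println("hits: " + hitUnits + " units, $" + hitUnits * price);
            System.out.println("tail: " + tailUnits + " units, $" + tailUnits * price);
        }
    }

With the same price and margin on both sides, the tail out-earns the hits here purely by aggregation; that is the whole argument in miniature.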



Flash on your mobile

Thu, 10 Mar 2005 11:21:38 -0500

Macromedia announced some time back that it had signed a licensing agreement with Nokia to provide Flash Lite support on Nokia's Series 60 phones. While I was overwhelmed by the news of Flash's entry into mobile phones, having keenly waited for it for over a year myself, I was slightly dejected by the fact that the adoption might not span all phone makers. Nokia, no doubt, is the current market leader, but its lead is constantly diminishing against Samsung, Motorola and the like. I hope Macromedia makes its product available to other phone makers too, else Flash might go the EDGE way: with not many handsets around, the technology never becomes a mass hit. I would rather have preferred it going the J2ME way. Oh well, it's a start anyway. But it seems I was not the only one waiting; there were others in the queue too. Russell Beattie keenly awaited it for 2 years, as he puts it. More from the link:

Flash Lite 1.1 still has some issues (like no file support - you can't save state, and the fact that it's based on older Flash 4.0 tech) and that the developer tools are still oriented towards designers not programmers, but still it's a great announcement. From what I've seen of Flash Lite, the applications developed are smaller, more compelling and quicker to program than their J2ME MIDP counterparts. Flash Lite will add another "middle-layer" programming platform to the Series 60/Symbian OS. Python will be great for hackers and maybe corporate developers, Flash will be great for consumer media-based apps. I can't believe it took this long to happen.

This is a great announcement, I can't wait to start seeing the Flash content flow. First thing, Macromedia and/or Nokia needs to make it a one-click option to grab the mobile player from their sites - the press release insinuated as much, and I hope it happens soon. My other wish is for Macromedia to create an IDE for Flash Lite - I can't deal with freakin' timelines. Give me drag/drop controls and a text editor for the Action Script please!

Completely agree with him on the first point. Flash will be for entertainment WAP sites and downloadable entertainment applications. I can sense the movie and music industries going all WAPpy with this tool, with games to follow. J2ME will be more for business applications, yes. Unfortunately, extensions of business applications have so far been restricted to palmtops. I sense that changing as soon as the memory on standard phones increases dramatically.

For the second point, I must admit I never thought in that direction. Me, of course, being a Java Software Engineer, I would love widgets and coding a la Eclipse or Visual Studio. If Flash ever wants business application makers to take it seriously, it has to provide this kind of development environment. All in all, good luck Macromedia; hoping to see Flash on my mobile soon!
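For contrast, here is the J2ME side of that comparison: a minimal MIDP "hello" MIDlet of my own, showing the programmer-oriented boilerplate that Flash Lite's designer tooling sidesteps:

    import javax.microedition.lcdui.Command;
    import javax.microedition.lcdui.CommandListener;
    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Displayable;
    import javax.microedition.lcdui.Form;
    import javax.microedition.midlet.MIDlet;

    // Minimal MIDP 2.0 MIDlet: one form, one exit command.
    public class HelloMidlet extends MIDlet implements CommandListener {
        private final Form form = new Form("Hello");
        private final Command exit = new Command("Exit", Command.EXIT, 0);

        protected void startApp() {
            form.append("Hello from J2ME");
            form.addCommand(exit);
            form.setCommandListener(this);
            Display.getDisplay(this).setCurrent(form);
        }

        protected void pauseApp() {}
        protected void destroyApp(boolean unconditional) {}

        public void commandAction(Command c, Displayable d) {
            if (c == exit) notifyDestroyed();
        }
    }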




Trends materialize: Yahoo and Music

Wed, 9 Mar 2005 11:06:07 -0500

I had already anticipated this move, of Yahoo doing well in the music industry, long back in one of my previous articles; and along comes news of Yahoo's plans to launch a Music Player and Music Store to add to its plethora of services on the net. Yahoo is so well placed right now, with its ID mechanism in place and doing wonders. Its Mail, Briefcase, Groups, Messenger, Launch, Geocities, Games, Chat, Mobile and Photos are so well integrated that every user just tends to explore them many a time, even without reason.

Anyway, back to Music, CNET has more:

Yahoo's full-fledged entry into the digital-music retail business could help shift a market that has remained tilted strongly in Apple's favor. Yahoo has already built a large and loyal following for its streaming-music and video service, and could parlay that into music sales. Indeed, the company's Launchcast radio service was the highest-rated Webcasting service online in January, according to ratings firms Arbitron and ComScore Media Metrix, attracting more than 2.2 million people that month.

However, Apple's dominance has been challenged by other giants, ranging from Sony to Microsoft, without substantially decreasing the iPod maker's market share. Last week, Apple said it had sold more than 300 million songs through its iTunes store since its launch.

"You have to look at how to create a linkage between a device and the online service," GartnerG2 analyst Mike McGuire said. "But given Yahoo's traffic and their very active communities, the potential (for success) is there." Yahoo has begun to streamline its music and multimedia properties over the past few months, changing the name of its Launch site to Yahoo Music and consolidating its entertainment businesses in a Santa Monica, Calif., office near Hollywood. The new MusicNet-powered music service will be integrated into Yahoo's existing infrastructure, possibly including features such as links to its popular instant-messaging program, sources said. MusicNet's technology allows companies to offer subscription services or per-song downloads, and is used by Virgin Digital, America Online and others. Sources close to the company said the new service is likely to launch by the end of the month.

Now, things like these make me lament the fact that I haven't updated this blog for almost 2 months now. That is something I hope to correct real soon. Well, this is a start!




Past, Present & Beyond: 2004 in Review: Cellular Explosion

Wed, 12 Jan 2005 11:48:09 -0500

The mobile industry grew by leaps and bounds this year: camera phones proliferated, the race towards 262k screen colors, MP3 players and video recorders getting in, 3G, Wi-Fi, mergers, and of course Windows and Linux making their way inside. From CNET...

Hybrid phones make a splash. By late spring, cell phone makers were introducing Wi-Fi phones, bringing new threats and opportunities to wireless carriers and traditional phone service providers. The highly anticipated hybrid phones let people make connections through a local wireless Internet access point, switching over to a cellular network whenever necessary. The result: greater flexibility in mobile communications. Hybrid handsets can use both data and voice applications, with most of the attention focused on data until recently. But that's changing, thanks to technology improvements for managing call transfers between Wi-Fi and cell phone networks and the increasing popularity of VoIP on corporate networks. Early versions of Wi-Fi cell phones failed miserably because of the enormous drain on the batteries--which must support two chipsets rather than one--and because users were forced to manually switch between networks. But at least one phone maker, Motorola, now claims to have solved the automatic transfer problem.

Expected? Hell yeah! Hybridism continues and there's nothing stopping it. These phones will not remain the exception but become the norm. However, there's still time before their adoption; 2005 will probably just make the scene more promising for the future. Besides the cost, there is even network bandwidth that one can save with such a device. Switching calls without disconnection, however, will be the biggest hurdle, and it seems some companies are already claiming to have solved it. This will also flare up more alliances (if not mergers and acquisitions) and make the networks more interoperable.

3G? 4G? Wi-Fi? VoIP? It was definitely a year of confusion, and that is not yet resolved. The hybrid solution perhaps is the best bet everyone can have, but what all will comprise the hybrid? Let's see the progress in some of these technologies:

3G: The promise of wireless broadband has been tantalizing mobile mavens for some time now. Cellular providers such as Verizon Wireless, Sprint PCS, Cingular, AT&T, T-Mobile, and, most recently, Nokia, have been baiting these masses by releasing a spate of products and services that they call 3G. This "third generation" of cellular technology, after previous waves of analog and voice-only digital services, is supposed to combine voice with broadband packet data transmission delivering fast Web surfing, streaming video and audio, multimedia messaging, and other services. But while the radio technologies that American carriers have installed are technically 3G, the services are more akin to dial-up Internet than broadband. By focusing on video services too early, operators risk undermining revenue per MB, says a new report from telecoms watchers Analysys. Operators in Japan and South Korea come under attack from Analysys for focusing on sophisticated mul[...]



Past, Present & Beyond: 2004 in Review: Apple Pie

Sun, 9 Jan 2005 06:30:20 -0500

Lately I have been swamped with work, getting less and less time for blogging my thoughts. The 2005 new year celebrations went by, and all blogs and journals, in the last weeks of the past year as well as the first weeks of the new, critically analyzed and reviewed the events of 2004 and came out with their predictions on "what to expect" in 2005. I couldn't miss out on this opportunity either, now could I ;) Though a touch late, I will share my thoughts on the events which will possibly shape the things to come. CNET has a great series going on the 2004 year in review, and I will put my analysis on top of that. So for this first part, I will concentrate on the change in Apple's strategy over the year, from a proprietary and rigid business model to a slightly more open, flexible and adaptive model that recognizes the change the industry is undergoing. Well, the start has been good enough; let's see what's in store ahead. I will follow CNET's commentary and add my comments inline. For a change, I will try out a new layout this time :)

iPod dwarfs iMac: Apple Computer, which rolled the dice three years ago with a hand-size MP3 player the size of a deck of cards, came up boxcars. This year, Apple was largely doubling down on the bet it made in 2001. At Macworld Expo in January, Apple took the iPod and made it a mini. Sales of the iPod rivaled those of the Mac for much of the year, before ultimately dwarfing those of the Mac in the October quarter, at least in number of units sold. Expectations for holiday sales grew into the millions as the iPod topped Christmas wish lists.

iTunes - the right vibes? [In the last year,] Apple CEO Steve Jobs announced the company's plans to sell tunes to Windows users. And while the company hasn't magically converted them all to the Mac view of the world, it has made a pretty nice business for itself. Apple has sold tens of millions of songs and more than doubled the number of iPods it is selling. The Mac maker won't say how many of its songs or players are going to Windows users, but it's reasonable to think it's a pretty good chunk, given the relative prevalence of PCs. Apple has clearly established the iTunes Music Store as the standard for legitimate music sales. According to new data from the NPD Group, iTunes retained a 70 percent market share for digital downloads between December 2003 and July 2004, the last month for which data is available. "iTunes has set the standard in online music in terms of sales, usability, and in the quality of its library," said Yankee Group analyst Mike Goodman. "They're the ones who cracked the code, and everyone is following in their footsteps."

While the success of iPods and iTunes is overwhelming, everyone is still skeptical about whether this is the right technique to sell online music or whether there is any better option available. Well, as for me, with Microsoft following suit in a similar fashion with Windows Media Center, it's definitely going to be the standard for at least the next few years. However [...]



Outsourcing - In and Out.

Sun, 26 Dec 2004 03:55:25 -0500

I came across an excellent article describing the opportunities, challenges, changing focus and sustainability of the outsourcing model, currently making hay while the sun shines. The article describes the emphasis outsourcing companies place on improving their productivity, protecting their markets from wannabes and addressing the biggest question: the sustainability of their growth. So without much ado, let's directly analyze the same (my comments are inline):

Instead of relying solely on captive centers or third party providers for their outsourcing needs, companies are increasingly turning to hybrid structures, says Ravi Aron, a professor of operations and information management at Wharton. "The debate over one or the other is really fading away, and firms are going toward what's called an 'extended organizational form' which brings together the strengths of the two models. It gives companies a way to say what they want done but also say how they want it done." Essentially, the client firm's managers act as very senior managers of a third-party provider. For instance, New York-based Office Tiger, a BPO solutions provider that has set up operations in Chennai, India, has a system through which companies can make day-to-day changes to processes, adding in verification layers. "I call this 'virtual prowling,'" says Aron. "In most captives, you are able to have a senior manager prowl the floor. So when a third-party provider gives you fine-grain analysis capability, you can still monitor all of these things." Thus, the client firm can see which teams are excelling at which processes - and start picking the composition of new teams based on that knowledge.

Well, what else can I say but that "hybridism" has made its way here too, and why not? Offer the clients selective services and they feel more secure and dynamic. Being transparent about the process does give one's customer the feeling of being in command, and would inculcate the greater trust that is important for the long run.

Even firms that swear by captive centers acknowledge that there is scope for more outsourcing. Peter Nag, vice president and head of the global program management office at Lehman Brothers, notes that Wall Street firms often go to captive sites in part because there's a disconnect in domain knowledge between the young managers in India and their older counterparts in the U.S. "We were able to offshore about 20% of our technology within the first year. But we couldn't get beyond that, because we had project managers in their 40s working with people in their 20s. Our projects were complex and proprietary, and we needed a high degree of control. But captive doesn't equal not outsourcing - they both do work and outsource, and it can open the way for more outsourcing once high quality work is proven."

I feel domain knowledge transfer is one aspect of outsourcing that scares the West. However, it becomes unavoidable if one has to reap the benefits of cheaper labour. Domain expertise commands high prices among firms specializing in outsourci[...]



Hybrids - the next big trend.

Sat, 25 Dec 2004 08:39:43 -0500

Gone are the days when a single approach would be adopted by all, when a single technology would attract most (let alone all), when a single proposition would excite all your clients. Customers now don't want to get locked onto one standard; they want a variety of choices available to them at any time, to adapt to their changing needs and times. In the current technology industry, where changes are part of the plan, it takes a lot to entice your customers to stick to your product range and offerings, and to trust that they won't end up adapting to your changing supply rather than you adapting to their changing demands.

The information technology industry especially has the seeds of oligopoly sown in pretty deep, and the final consumers often feel forced into adopting a particular standard/format, more because of a lack of options or imposed choices. One trend, however, which has already started to break this jinx is hybrids. Hybrids (crossbreeds), in technology terms, are products that can adapt to different technologies or inputs, producing logically similar results for those varied inputs. What hybrids promote, inherently, is the path towards ultimate convergence.

So we have hybrids in many technology industries, even outside IT. We have hybrid cars, fueled by oil or electricity. The very basic advantage of such an offering is freedom of choice: you are no longer bound to rising fuel prices affecting your monthly budget. And once this variety becomes a standard, we can also see support "plug-ins" (as the techies might say ;) ) opening up even more choices to you. Then we have cell phones supporting different frequencies to help you stick to the same handset during international travel. We should even have a CDMA-GSM hybrid for India. Later on, we should be able to add VoIP and 3G onto the same, if required.

In IT, the trend is picking up. Solaris 10's support for native Linux applications makes it one of the biggest hybrid OSes coming up: the cool advantage is that your Linux apps become executable on a Solaris box. IBM's hybrid database supports data in both native relational and XML formats; this feature will surely push the concept of a liquid XML database further. Ultimately, a single query could be used irrespective of the vendor of the database tool, depending only on your database design (see the sketch below). Even the iPod's multi-format support allows one to choose between the proprietary formats and the general MP3 standard. Going ahead, convergence in the form of hybrids will probably be seen in the television arena (BitTorrent, TiVo, video-on-demand), gaming devices (one device - any vendor format - Xbox, PS2, PSP, GameCube, N-Gage, PC), Web search (text, videos, audio, shopping) and computing (take away a part of your desktop as a smaller pocket PC, probably :) ), amongst others.
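As an illustration of that "single query" idea, here is a hedged Java sketch of querying such a hybrid store through an SQL/XML-style XMLQUERY function; the JDBC URL, driver, table and column names are all hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch only: mixes a relational predicate with an XML-path expression
    // in one query, the kind of access a native-XML hybrid database promises.
    public class HybridQuery {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:somedb://host/sales");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT order_id, " +
                     "       XMLQUERY('$d/order/customer/name' PASSING doc AS \"d\") " +
                     "FROM orders WHERE region = 'APAC'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getString(2));
                }
            }
        }
    }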




Optimized Data File with inter-operability

Sun, 19 Dec 2004 05:17:02 -0500

J. Scott Edwards has presented a nice technique for optimizing the data files required by applications without compromising on inter-operability. I think it applies well to applications too, in addition to the OS itself, which is what he stresses:

My idea is to have all of the information stored on the disk in the native Object format of the program. That way instead of having to constantly convert and interpret data, the application can just access that object directly. And when that data is needed in a flat file format, you have a converter App (object) that can access the internal data and convert it to a flat file type of format. For example, let's say you have some compressed (with Ogg Vorbis or whatever) audio objects on your computer. And you want to burn an audio CD which can be played in a normal audio CD player. You would create a playlist object and connect the output (more on this later) to the input of an Ogg Vorbis converter object and then into the Audio CD burning object.

Though fairly basic, I haven't seen many applications using this so far.
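A minimal Java sketch of the idea, with all names hypothetical: the application persists its native object directly, and a separate converter object produces the flat, interoperable format only on demand:

    import java.io.*;

    // The app's native, directly-persisted object.
    class Track implements Serializable {
        String title;
        byte[] compressedAudio;   // e.g. an Ogg Vorbis payload
    }

    // Flat-format export lives in a separate converter, used only when needed.
    interface FlatFileConverter<T> {
        void export(T nativeObject, OutputStream out) throws IOException;
    }

    class TrackToWavConverter implements FlatFileConverter<Track> {
        public void export(Track t, OutputStream out) throws IOException {
            // decode t.compressedAudio and write PCM/WAV bytes here (omitted)
        }
    }

    class NativeStore {
        // Save/load the object exactly as the program uses it: no per-load parsing.
        static void save(Track t, File f) throws IOException {
            try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(f))) {
                oos.writeObject(t);
            }
        }
        static Track load(File f) throws IOException, ClassNotFoundException {
            try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(f))) {
                return (Track) ois.readObject();
            }
        }
    }

The gain is that the expensive parse/convert step leaves the common load path and runs only at export time.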




The changing face of Journalism

Sat, 18 Dec 2004 13:48:29 -0500

A couple of days back, I caught a nice little show on the Indian news channel Headlines Today, named "Top 5 Today", which featured the renowned media name Vir Sanghvi as the host, discussing the changing face of Indian and international journalism with reference to the intimate photographs of actors splashed across a mainstream newspaper. A few interesting thoughts he put forward:

  • Journalism today is more about people than issues. This is a global trend, not just with reference to India.

  • If the photographs were those of a political figure or an industrialist, the media could have been accused of "unethical" journalism. But since they concerned film actors, it's all legal. Actors themselves give the press entry to their private parties and want their personal life details to be published in the media. They want to be talked about.

  • Journalism, unfortunately, is also a business, and businesses have to earn profit. There are some people who practice very wise "ethical" journalism, so to speak. Then again, unfortunately, one of them goes bankrupt each year!




Innovation redefined

Sat, 18 Dec 2004 13:48:12 -0500

Rajesh Jain points to an article by Michael Schrage which puts forward a different view on the definition of innovation:

Innovation isn't what innovators do; it's what customers, clients, and people adopt. Innovation isn't about crafting brilliant ideas that change minds; it's about the distribution of usable artifacts that change behavior. Innovators - their optimistic arrogance notwithstanding - don't change the world; the users of their innovations do. That's not a subtle distinction.




Sun's Niagara and vision.

Sat, 4 Dec 2004 07:23:38 -0500

Almost as a continuation of the earlier article, here's why Sun is in such a better position than IBM. It already has its own OS, Solaris (which, I must point out, has till now been a bit subdued), and that OS already has the power to exploit its hardware offerings. It is launching its Niagara processor by 2006:

The Niagara chip has eight processing engines, or cores, each capable of running four simultaneous instruction sequences, or threads. Though it lacks circuitry to maximize the speed with which a given thread will run, Sun expects the chip to be useful for replacing large numbers of lower-end servers. Niagara is a crucial part of Sun's attempt to keep the Sparc family of processors relevant in the face of widely used x86 chips from Intel and Advanced Micro Devices and increasingly powerful Power processors from IBM. Niagara was spawned at start-up Afara Websystems, which Sun acquired in 2002. Each processor core on the chip juggles four threads, switching from one to another when one is held up by slow communications with the computer's main memory. Sun is touting the processor as a solution to power consumption woes in corporate data centers. Each Niagara processor consumes 56 watts. By contrast, it's not unusual for a high-end server chip to use between 80 watts and 120 watts.

Also, an older CNET article points out that Sun is banking on large central servers really being the key area for chip manufacturers, as thin clients compromise a bit on computing power. I would endorse that view.

Sun's throughput computing plan is designed to vastly increase the power of servers and thus to reclaim momentum Sun has lost to Intel. The technique, which won't result in chips larger than those from competitors, sacrifices the ability to perform one task extremely quickly for the ability to do multiple independent tasks simultaneously. Sun has changed dramatically in the last year, dropping its argument that its Solaris operating system and UltraSparc processors are sufficient for all computing needs and letting the Linux operating system and Intel processors into its product line. But essentially the company is sticking by one of its mainstay principles: Leave the computing work to large central servers, not to desktop machines. In McNealy's vision, rather than each person having his or her own desktop computer, many people will share centralized servers. They'll carry not laptops but tokens that will grant them access to their private computing resources. "The shared resource model blows the doors off" the dedicated model, McNealy said. As an alternative to PCs, Sun has loudly trumpeted its Sun Ray system, which does no processing on its own but instead relies on a central server. Sun is working on a future version called WAN Ray that can use wide-area network technology such as DSL lines or cable modems to connect to the server, McNealy said. Ultima[...]
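In software terms, the throughput bet looks something like this toy Java sketch: many independent, latency-tolerant tasks spread over a pool sized to match the chip's 32 hardware threads, trading single-task speed for aggregate work done (the workload and pool size are my illustration, not Sun's code):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Toy throughput-computing illustration: saturate many hardware threads
    // with independent tasks instead of speeding up any single one.
    public class ThroughputDemo {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(32); // 8 cores x 4 threads
            for (int i = 0; i < 1000; i++) {
                final int request = i;
                pool.submit(() -> handle(request)); // independent, latency-tolerant work
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }

        static void handle(int id) {
            // e.g. serve one web request: mostly memory/I-O waits, little raw compute
        }
    }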



Chips don't matter if they don't have software

Sat, 4 Dec 2004 07:22:24 -0500

Jonathan Schwartz quoted this in a very interesting article describing IBM's failure to push its chip segment due to the lack of a good software strategy for it. This particular thought is pretty interesting at this juncture of the chip industry, when chip manufacturers are trying to either define their own segment, adapt to a successful one, or beat the leader at his own game: Intel, AMD, Sun, IBM and many others are in the race. Jonathan points to an earlier article of his describing how IBM found itself quite the laggard in the race. I will snip out some key points...

IBM CEO, John Akers. Akers and his staff had the wisdom to enter the PC market in its early days, but the short sightedness to suggest customers source their PC operating system from a little company in the Pacific northwest. The company turned into Microsoft, and they continue to generously return the fruits of their coup to their stockholders. A few years back, IBM and HP both hopped onto the social movement called linux. It's a wonderful movement. But the bad news for IBM is that the vast majority of enterprise datacenter deployments are now occurring on Red Hat's linux. And with Red Hat increasing price, while adding in an application server that competes with WebSphere, IBM's finding itself in the uncomfortable position of having lost control of the social movement they were hoping to monetize. They're beginning to look like the IBM of Mr. Akers's era - having missed the forest for a tree, and finding themselves without an operating system. And with most enterprises having picked Red Hat on IBM's recommendation, IBM now clumsily realizes it's invited the fox into the hen house. With Red Hat running on the majority of IBM's proprietary hardware, Red Hat can now direct those customers to HP and Dell. Even Sun. Now if you're an IBM customer, you've probably received (or should prepare to receive) the pitch from IBM incenting you to move off Red Hat to SuSe. Bringing in SuSe at the last minute isn't having nearly the effect IBM desires - at least from the customers, developers (and press) I speak with. Moving from Red Hat Enterprise Server to SuSe's Enterprise Linux is very complicated (eg, which application server do you pick?), and with IBM's consulting bill, very expensive. IBM is in a real pickle. Red Hat's dominance leaves IBM almost entirely dependent upon SuSe/Novell. Whoever owns Novell controls the OS on which IBM's future depends.

Now that's an interesting thought, isn't it? I'd keep a close eye on the Novell/SuSe conversation. If IBM acquires them, the community outrage and customer disaffection is going to be epic... but where else does IBM go? And the next quotes are really a belting (featuring in the formal link)... I'm watching with amusement as IBM prepares to stub its toe with their new, curiously named [...]



Now a markup language for security - AVDL.

Mon, 29 Nov 2004 12:44:03 -0500

SOAP has proved that inter-operability can be achieved with standardization. Neither CORBA nor DCOM could achieve it; neither of them had quite the bulk of support that SOAP did. And the raw bases of SOAP are XML and industry-wide acceptance. Now, security has gathered much pace these past few years, escalated mostly by flaws found in MS software and also by the bigger worms and spam bots doing the rounds. Yet security is probably the least standardized area of IT. We have all heard of firewalls and IDSs (Intrusion Detection Systems), but there has never been a dedicated protocol, language or framework for security. Every vendor simply defines security in his own way and the clients have to adapt to it. OASIS (Organization for the Advancement of Structured Information Standards) has come out with a new security interoperability standard, AVDL (Application Vulnerability Description Language). This new standard seems to have at least 2 of the benefits of SOAP: XML data and broad industry acceptance. More from Net-Security and related...

The Application Vulnerability Description Language (AVDL) is a rather new security interoperability standard within the Organization for the Advancement of Structured Information Standards (OASIS) that was first proposed in April 2003 by several leaders within the application security space. AVDL creates a uniform way of describing application security vulnerabilities using XML. With dozens of security patches and application level vulnerabilities released each week, enterprises must deal with a constant flood of new security patches from their application and infrastructure vendors. To make matters worse, network level security products do little to protect against these vulnerabilities at the application level. To address this problem, enterprises today have deployed a host of best-of-breed security products to discover application vulnerabilities, block application-layer attacks, repair vulnerable web sites, distribute patches and manage security events. Enterprises view application security as a continuous lifecycle. Unfortunately, there is currently no standard way for these products to communicate with each other, making the overall security management process far too linear, manual and time-consuming. Enterprise customers are asking companies to provide products that interoperate. A consistent way to describe application security vulnerabilities via XML is a significant step towards that goal. Today, these vendors proposing AVDL are actively engaged in projects whereby XML-based vulnerability descriptions will be used to improve the responsiveness and effectiveness of attack prevention, event correlation, and remediation technologies. XML establishes a common framework, but XML alone does not ensure vendor interoperability. AVDL Benefits Throughout the Application Lifecycle: Developers and Quality Assurance During the application developme[...]
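To see why an XML vulnerability description helps tools interoperate, here is a small Java sketch that parses one. The element names are purely illustrative; they are not the actual AVDL schema:

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    // Sketch: any scanner, firewall or patch tool could consume a record like this.
    public class AvdlStyleDemo {
        public static void main(String[] args) throws Exception {
            String xml =
                "<vulnerability id='V-001'>" +
                "  <summary>SQL injection in login form</summary>" +
                "  <severity>high</severity>" +
                "  <remediation>Use parameterized queries</remediation>" +
                "</vulnerability>";
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            Element v = doc.getDocumentElement();
            System.out.println(v.getAttribute("id") + ": "
                    + v.getElementsByTagName("summary").item(0).getTextContent()
                    + " [" + v.getElementsByTagName("severity").item(0).getTextContent() + "]");
        }
    }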



Patents: The madness continues.

Mon, 29 Nov 2004 12:41:30 -0500

As if the earlier ones were not bad enough, we have another round of broad patents, this time on the very basic Web Services concept, going up for "auction". I mean, what could be worse than allowing the general public a chance to take a shot at squeezing the established organizations? It's really silly that companies are coming together to buy the patents themselves (they will retire them, as mentioned on CNET). This news will surely send the bids much higher, and the chances of rigging only increase now. Craig Smith, the founder of CommerceNet, the company proposing to buy and retire the patents with the help of funding from other established companies, has very humorously put it: "It's a little bit like paying the blackmailer before they have something to blackmail you about."




Solaris - will it finally become the 3rd OS in version 10?

Sat, 20 Nov 2004 14:35:46 -0500

Solaris is one of the oldest operating systems around, but not many talk about it or take it seriously (thanks to the media). Windows and Linux, Linux and Windows, RedHat and Microsoft and SUSE - that's all we keep hearing about. Solaris is always treated as an also-ran, along with the likes of HP-UX, IBM's AIX etc. It really wasn't clear whether Sun was phasing it out of its product line or preparing for something big. Perhaps something as big as Solaris 10.

Solaris 10 is great on paper. The new features are quite tempting, for me at least. It seems like one big last effort from the software major to finally make it the 3rd OS of the media world, the 1st and 2nd being MS Windows and RedHat/SUSE Linux of course. I tend to take those two names together many times, simply because I think their future lies in working close to each other and developing a standards-based Linux. If they drift apart, they are more likely to end up 3rd and 4th, with someone taking over the 2nd position; someone like Solaris.

What I liked most about Sun Microsystems' approach to the development of Java was its controlled development. Even though RedHat and IBM kept pushing Sun to make Java open source and extensible, it really was better to have centralized, controlled development of the platform. That is what Sun chose, and it is what made Java special and made it all work right. If there were different flavors of Java available today, as is the case with Linux, J2EE would be a very distant 2nd to MS's .NET platform. But as luck would have it, the 1st and 2nd positions in the development platform race are really debatable, and in fact J2EE commands the better position today, according to me.

So back to Solaris 10: Sun will make it open source, but Java-style; the development process will continue to be controlled by Sun itself. And the story doesn't stop there. Along with making it open source under a specific license, Sun will also launch a patent protection plan, as CNET reports:

When Sun Microsystems releases Solaris as open-source software, it plans to provide legal protection from patent-infringement suits to outsiders using or developing the operating system--one of several ways Sun hopes to make Solaris more competitive with Linux. "You should have a company that can protect you and take that $92 million bullet," Scott McNealy said. Sun also has an arsenal of patents it can use as the basis for countersuits against computing companies, he said, adding that "most people with network-computing intellectual property probably don't want to come after us, because we might go right after them." But open-source developers using Solaris technology need not fear that Sun's patent arsenal will be used against them, Sun President Jonathan Schwartz said. "It is not our intent to say, 'Here is our intellec[...]



Microsoft Takes Lead in Software For Handhelds

Sat, 13 Nov 2004 10:37:18 -0500

Yahoo and USAToday confirm that Microsoft is outselling PalmSource in handheld software. Now, this really comes down to the quality of the software, its user-friendliness, better compatibility/inter-operability with your home PC ;), application variety and of course marketing. If Linux, Sun and Palm now cry over it, it will really be a pity; I think some of MS's rivals are just not pushing hard enough. There is Apple, who raced ahead with its iPod, and Google, who exploited the less-looked-into search technology. Good luck to them in maintaining their leads. But there are so many markets where MS is just wiping out the others; I would certainly give it a lot of credit for its achievements. More from those articles...

Microsoft overtook PalmSource in the third quarter as the world's biggest operating system for handheld computers, a survey showed on Friday. Microsoft's Windows operating system accounted for 48.1% of worldwide shipments of personal digital assistants (PDAs), up from 41.2% in the year-ago period, according to July-September statistics from research group Gartner.

Palm's share dropped to 29.8% in the third quarter of 2004 from 46.9% in the same period last year. Canada's Research In Motion, which produces the hardware and software for its popular BlackBerry wireless e-mail devices, was a strong third, quadrupling its global market share in twelve months to 19.8% from 4.9%.

Worldwide shipments of personal digital assistants (PDAs) increased 13.6% to 2.86 million units. Linux remained a distant fourth and lost market share as it is running on only 0.9% of handheld computers, down from 1.9% a year ago.

The handheld computer market is competing with the faster growing smartphone market, which is expected to double to 20 million units this year. Symbian provides the dominant software in that market segment of advanced mobile phones which can run computer-like applications like navigation software and email.

Symbian looks weak to me, because it's got the same problems as Palm, and Windows is already in the smartphone market. Linux is, as usual, a laggard. One more run for Microsoft soon? Let's wait and watch.




Sun on Linux

Sat, 13 Nov 2004 10:36:50 -0500

Perhaps Sun's biggest bet was always the emergence of multiple operating systems. Java, a platform-independent language (or even a platform), was always a bet on multiple OSes working together in a combined information delivery model; that it was the platform of choice on NASA's Mars Rover proved the point consistently. LinuxWorld has an interview with John Loiacono, executive vice president of Sun Microsystems, over Sun's Linux strategy and a bit more...

There are two different questions that you have asked, maybe three. What is Sun's viewpoint on open source? What is Sun's viewpoint on Linux? What is Sun's viewpoint on Red Hat? Sun was founded on the principle of open source. We have contributed more lines of open source code than any other entity on the planet except for Cal Berkeley. NetBeans, Sun Grid Engine, OpenOffice, and Solaris are all technologies that use the open source process, and we will continue to do so. We'll remain a heavy contributor on the open source front, and it will remain a key component of how we develop software. People don't realize today that a huge portion of Solaris is open source. For example, today we use GNOME as our desktop environment. We use Mozilla. We have integrated Apache. We have SAMBA. All of these pieces of software are a part of Solaris today. Some people think that open source is new to Sun and that we don't get it. We are a pioneer.

Sun, I think, hasn't been very successful with its own product range - NetBeans, Sun Grid Engine, OpenOffice, and Solaris. However, it is still banking big on Solaris. In fact, the moment he separates out the internals of Solaris into GNOME, Mozilla, Apache, SAMBA etc., we see all success stories.

We firmly believe that Linux (server and desktop) is an x86/AMD phenomenon. We believe that this will continue. Understanding that it does run on other architectures, 99% of the volume generated in the Linux space is on x86. We think that Linux will continue to be a big player, including on the desktop where people are concerned about cost and want an alternative to Windows. Linux is something that we'll have to interoperate with because it may exist far beyond whatever Solaris turns out to be. We are in favor of Linux. We think that the Linux movement is great and that the open source process is great. We are leveraging open source in our software stack where it makes sense.

Perhaps the key takeaway from the whole interview: the x86/AMD phenomenon. Linux is well suited to other platforms, even embedded ones, but what it is known for is the x86 market. This, I think, will have to change sooner rather than later, and Linux will have to be thought of more as a machine operating system (read mobiles, palms, televisions, cars, machinery, embedded devices) rather than just a desktop system. However, we[...]



Addressable Television

Sun, 31 Oct 2004 04:52:14 -0500

Will of the TVHarmony weblog has posted some excellent thoughts on on-demand television, on-demand video or whatever you might want to call it (he calls it Addressable Television). I subscribe to his views completely; here are the highlights of his post:

I like the term "addressable television" to describe the ability to get television content in a similar fashion as getting web content. The area of disagreement is which technology is going to "win". Here are the contenders as the group saw it:

  • Video on Demand (VOD)
  • TV delivered via phone lines (IPTV)
  • Video on the Internet (Streaming)
  • Downloadable Internet (BitTorrent)

Many of the crowd there found the BitTorrent model compelling, citing the history of the music industry and Napster as likely to be repeated for video. I tend to agree that to a certain extent this is already happening, with people avoiding copyright law and putting up content on the web, and the roadblocks to moving video streams from a DVR to the internet are quickly eroding. Here's the basic point: I think the advent of the media-centric PC will cause this trend to accelerate. If my family room is driven by a PC with a DVR, set top box, and web browser built into it, connected to cable for both programming and high speed data, and then connected to a nice big flat panel display, the option to watch a show via live TV, VOD, DVR, or BitTorrent is just a click of the remote. While I agree that is compelling, I think there are hurdles to making this vision work in the long term. I think they will be overcome, but for a large percentage of the population, VOD, especially if it expands to becoming a centralized DVR, is likely going to be the easier solution. First, I think the battle will ultimately be played out on HDTV. The cost of HDTV is getting lower each day, and more and more people are buying HDTV-ready sets. More and more content is being delivered in HDTV format, and it won't take too much time before people demand HDTV streams as a viewing preference. Second, I view video's relationship with people differently from the relationship people have with music. People listen to music over and over again, but in general, video is a single use commodity for the most part (I have a 3 year old daughter so I can tell you there are exceptions to that rule). This changes the calculus slightly in that the pain to download a video has to be less than the pain to download a music track, or it doesn't seem worth it. Third, there is a shelf life issue. Music is fairly easy to store and since people listen to it over and over again, it has a long shelf life on a networked computer. A lot of video content has an expiration date and while[...]



Trend indications from the CEOs: John Chambers, Craig

Sun, 31 Oct 2004 04:51:25 -0500

I consider it a good habit to listen to some of the influential people in the software industry and their thoughts on the future progress of the industry. Not only do many of these guys' views have an impact on the industry as a whole, but their knowledge and experience make them think before they speak, and they are many a time in a position to analyze things better than the media or any individual developer. I would have to add that it's not always the case, though. I have a select set of people whose thoughts are quite in sync with mine on the future trends of computing and software in general, chief among them Bill Gates and Scott McNealy. So, for this particular annual gathering of tech professionals at the Gartner Symposium and Information Technology Expo, I caught Yahoo's summarized article on some of the chief issues discussed. My comments are inline.

John Chambers (CEO of Cisco Systems)

Q: Cisco has opened a research and development center in China and launched a venture fund in India. Do you expect most of your growth to be outside the USA?

A: The majority of our job growth will be in America. In China, we're adding 95 jobs in one to two years. That's normal growth. We're a little different (than many other tech companies) in that we want to keep the majority of our jobs here.

Q: Why? Wouldn't sending jobs outside the USA save costs?

A: It's good business to try to do right by your employees. We try to treat our people like we would like to be treated. We want balanced growth globally. We're very open with our employees about that.

Unlike software firms, their hardware counterparts are less likely to catch on to the outsourcing burst, I feel. This is because while you just need connectivity to be able to develop, maintain and test software offshore, it's not that easy for hardware. Especially in the case of India, hardware is just not progressing in leaps at the same rate as software. The infrastructure setup is partly to blame, but overall the industry has not picked up much due to a shortage of local demand.

Q: Cisco has made a number of announcements related to security, including partnerships with Microsoft and IBM. So far, security has largely been relegated to software companies, and some analysts say hardware makers need to do more. What role should hardware makers play?

A: There should be relatively open standards so that a consortium of companies (can work together). We work with our software application partners and even our competitors. We can get along with IBM and Microsoft and Sun (Microsystems). The industry as a whole has to work on it. It's our biggest opportunity and our biggest challenge.

Somehow many of the industry's problems always end up at this stage: lack of standardization. It's a[...]



A good thought from Linus Torvalds.

Sun, 31 Oct 2004 04:50:52 -0500

In an excerpt from an email interview with Linus Torvalds, I came across a good little piece of advice from him for start-ups, and I think it really is quite logical, though a little uncommon:

Nobody should start to undertake a large project. You start with a small _trivial_ project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision.

So start small, and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly over-designed. And don't expect people to jump in and help you. That's not how these things work. You need to get something half-way _useful_ first, and then others will say "hey, that _almost_ works for me", and they'll get involved in the project.

And if there is anything I've learnt from Linux, it's that projects have a life of their own, and you should _not_ try to enforce your "vision" too strongly on them. Most often you're wrong anyway, and if you're not flexible and willing to take input from others (and willing to change direction when it turned out your vision was flawed), you'll never get anything good done.

In other words, be willing to admit your mistakes, and don't expect to get anywhere big in any kind of short timeframe. I've been doing Linux for thirteen years, and I expect to do it for quite some time still. If I had _expected_ to do something that big, I'd never have started. It started out small and insignificant, and that's how I thought about it.




How Microsoft lost the API War.

Sat, 23 Oct 2004 05:27:26 -0400

This is a pretty old article, but pretty interesting nonetheless; the lessons to be learnt from it are still valid in the present scenario. Joel Spolsky notes down some of his thoughts on how Microsoft eventually lost its stronghold of developer support for the Win32 API. I think it was always eventually on the cards, but definitely some of MS's decisions led to a speedier defeat. My comments are inline.

Microsoft's crown strategic jewel, the Windows API, is lost. The cornerstone of Microsoft's monopoly power and incredibly profitable Windows and Office franchises, which account for virtually all of Microsoft's income and covers up a huge array of unprofitable or marginally profitable product lines, the Windows API is no longer of much interest to developers. The goose that lays the golden eggs is not quite dead, but it does have a terminal disease, one that nobody noticed yet. Remember the definition of an operating system? It's the thing that manages a computer's resources so that application programs can run. People don't really care much about operating systems; they care about those application programs that the operating system makes possible. Word Processors. Instant Messaging. Email. Accounts Payable. Web sites with pictures of Paris Hilton. By itself, an operating system is not that useful. People buy operating systems because of the useful applications that run on it. And therefore the most useful operating system is the one that has the most useful applications. The logical conclusion of this is that if you're trying to sell operating systems, the most important thing to do is make software developers want to develop software for your operating system.

I quite agree with this; in fact, my thoughts are quite in sync with Joel's. It's the reason MS Windows is so hard to replace, even with a better OS, let alone a not-so-user-friendly one. In fact, the open sourcing of Linux and of IBM's Eclipse was the only possible move to attract developers to adopt the new thing, which otherwise would have ended up in the same state as Unix is today.

Why Apple and Sun Can't Sell Computers? Because Apple and Sun computers don't run Windows programs, or, if they do, it's in some kind of expensive emulation mode that doesn't work so great. Remember, people buy computers for the applications that they run, and there's so much more great desktop software available for Windows than Mac that it's very hard to be a Mac user. And that's why the Windows API is such an important asset to Microsoft.

Quite in sync, one could look at the mobile market. It was dominated by Symbian, but the lack of applications (plug-n-play, if you may) on that platform, I think, didn't make it a must-have for your mobile. Java hit the market [...]



Some findings on Nanotechnology.

Sat, 23 Oct 2004 05:26:49 -0400

I guess these articles are only for people like me who are newbies to the nanotechnology world. I came across a BBC News article about the discovery of a new nanofabric called graphene. More from the same...

Called graphene, it is a two-dimensional, giant, flat molecule which is still only the thickness of an atom. The nanofabric's remarkable electronic properties mean that an ultra-fast and stable transistor could be made. Scientists have been trying to exploit this for computing because smaller transistors mean the distances electrons have to travel become shorter, meaning faster speeds. Conventional transistors rely on the semi-conducting characteristics of silicon which provide the switches that change the flow of current in computers and other electronics. "All the recent progress has been on nanotubes for transistors. These are sheets of graphite molecules wrapped in a cylinder - like a chocolate cylinder you stick in your ice cream," explained Professor Laurence Eaves. "Although these are interesting, because they are one-dimensional, they have limitations. Graphene is a plane transistor - flat sheets." Professor Andre Geim, who leads the research team, explained that the material they have discovered could be thought of as millions of unrolled carbon nanotubes which have been stuck together to make an infinitely large sheet, an atom thick. They showed that electrons could travel sub-micron distances without being scattered, which means fast-switching transistors. He added: "People have been trying to make transistors faster and smaller. There is a Holy Grail of electronics that engineers call ballistic transistors - ultimately faster than anything." A ballistic transistor is where electrons can shoot through without collisions, like a bullet. In other words, they have what is called a long mean free path - the distance a molecule travels without colliding into another. Greater distances with nothing to collide with mean faster speeds. Fewer collisions mean less energy is lost or given off too. Although they have not demonstrated a ballistic transistor yet, their experiments have shown that the material could, in theory, produce one.

I also ventured further in a quest for a little more knowledge of this emerging new field and came across a beautiful presentation from the same site. Some excerpts from that presentation, explaining the gist of nanotechnology and its diverse uses: Nanotechnology concerns materials and working devices that are engineered at the scale of atoms and molecules. Advances in nanotech will impact electronics and computing, medicine, cosmetics,[...]