Mon, 25 Sep 2006 13:01:11 -0400
I do a lot of reading, both online and offline. I am very much into audio books these days, which I feel are a really useful option when you are commuting to and from work. I thought of listing today some of the stuff I really enjoy reading, and of recommending that people like me try it out:
Sun, 9 Apr 2006 11:26:04 -0400
Recently, with the spark of plenty of social communication tools like wikis, blogs and the like, it seems highly likely that email will now be the last choice for collaboration among teams in organizations (especially software and related ones). However, I am not seeing a complete solution dedicated to team collaboration, or maybe I am a little hard to please ;). I thought over my 'wishlist' of features for such a tool, which I think would benefit a very large section of groups in various techie, non-techie and even social organizations for their project management and team collaboration requirements.

My concept revolves around empowering the lower and middle management, down to the last employee in the hierarchy, to manage and inform team members about day-to-day progress, and making it easier for top management to drill up and down to the granularity of information and status needed. So let's say the tool is web-based, where each team member has an account and is part of a project. On a daily basis, each participant updates the progress status of the activity he/she is involved in, and managers/leads responsible for project milestones update that information when milestones are met. The tool should provide:

Categorizing activities/tasks: Each activity an individual is working on has to fall under a certain category, and there has to be a hierarchy of such activities, providing various levels of view for management and team members alike. Imagine a software project. There can be various top-level categorizations of a project's progress, like Product/Feature Conceptualization, Product Design, Implementation, Testing, Deployment, Customer Support etc. Let's call this Level 1 categorization. Each of these can be drilled down further into still more granular categories; Product Conceptualization, for example, might break into Market Research Findings, Prototyping, Feasibility Analysis etc. This will be Level 2, and we can have n levels, further dividing the activities for more precision.

Now each individual, say a software engineer, would update the status of an activity daily. Say he/she is assigned a certain module to develop, or certain bugs to fix; he/she does that and updates the system. Based on these individual updates, the system can give a holistic picture of what is going on in the project. In the above chart (assume an incremental development methodology), top management and members see the progress in each area of the project, and they can drill down further to get details in each category. The same chart can have milestones marked out separately for each category and granularity. Say the development sub-team has a milestone to release beta version 0.8 of a product, and testing has a milestone of load and functional testing of the same. With the changing nature of the industry, more and more beta releases are hitting the market, not to mention beta releases to internal customers within the organization, if any. This might give a good view of things.

Recording everything: The biggest advantage that wikis and blogging provide is archiving and easy access to data. The best use of this would be to record everything, like, say, the design document of a component being developed. This document can be entered into the system on a particular date, appropriately linked as a milestone or task deliverable under a particular category. A person from any team needing that document can then hunt for it in the specific stream (category - Design) at any level of granularity; he/she can further do a granular search based on its time range of publishing or a specific keyword. It can also be used for tracking dependent/related activities (as explained in the next point). But textual data is not the only input that should be captured.
Let's face it, it's the informal discussions and sometimes client [...]
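The hierarchical categorization and daily status updates described above can be sketched as a simple tree with a progress roll-up. This is only an illustrative sketch; the class and field names are my own invention, and the roll-up here is a plain average (a real tool would likely weight tasks by effort):

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One node in the activity hierarchy (Level 1, Level 2, ... Level n)."""
    name: str
    percent_done: float = 0.0          # leaf activities carry their own status
    children: list = field(default_factory=list)

    def progress(self) -> float:
        """Roll up completion across the subtree; a leaf reports its own status."""
        if not self.children:
            return self.percent_done
        return sum(c.progress() for c in self.children) / len(self.children)

# Level 1 / Level 2 hierarchy from the example above
conceptualization = Activity("Product Conceptualization", children=[
    Activity("Market Research Findings", 100.0),
    Activity("Prototyping", 50.0),
])
project = Activity("Project", children=[conceptualization, Activity("Testing", 0.0)])
print(round(project.progress(), 1))  # 37.5 -- the drill-up view for management
```

A Level 1 node's progress is then just the recursive aggregate of its deeper levels, which is exactly the drill-up/drill-down view described.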
Sat, 18 Feb 2006 04:33:05 -0500
Sat, 4 Feb 2006 06:11:27 -0500
Sat, 5 Nov 2005 05:35:10 -0500
Sun, 9 Oct 2005 11:04:06 -0400
John Seely Brown is the chief scientist at Xerox Corporation. At the Thought Leader Forum in the year 2000, he made some amazing predictions that have come true, even in terms of the timeline he calculated. I am highly impressed with his vision of the future convergence of various scientific disciplines and crafts, which he terms 'Judo'. Following are some of his important predictions from the same article:

The confluence of three laws drives the new digital power and underlies the exponential pace of change. The first law is the law of communication, or the Fiber Law. This states that the capacity of fiber networks is doubling every nine or twelve months. The second law is Moore's Law, which states that the capacity of computational systems per dollar doubles every 18 months. The third law, which replaces Metcalfe's Law, is called the Law of Community. Metcalfe's Law states that the value of a network is the square of the number of members of that network. [Metcalfe's Law], however, was created in a world of local area networks. If we move to wide area networks such as the Internet, you are then looking at systems that support virtual communities. So, out of n people, how many virtual communities can you create? You can actually create 2^n communities, versus the n^2 number of relationships between n individuals. And if you take into account the web crawlers interconnecting these communities, you create 2^(n^2) relationships. This is a number astronomically larger than Metcalfe's Law predicts. I believe it is this law that is driving the explosive increase of traffic on the Internet backbone today. If you look at the number of communities, not the number of people, and recognize that they are also talking to each other through web crawlers and intelligent agents, then you begin to get a sense of the invisible dynamic that is driving unprecedented demand on the Internet.
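Brown's comparison of the two laws is easy to verify numerically. A quick sketch (function names are mine) of n^2 pairwise relationships versus 2^n possible sub-communities; strictly speaking, 2^n counts every subset including the empty and one-member ones, but the exponential dominance is the point:

```python
def metcalfe_value(n: int) -> int:
    """Metcalfe's Law: value grows as the pairwise relationships, n^2."""
    return n ** 2

def community_value(n: int) -> int:
    """Law of Community: possible subsets (virtual communities) of n people, 2^n."""
    return 2 ** n

# Even at modest n, communities dwarf pairwise links
for n in (10, 20, 30):
    print(n, metcalfe_value(n), community_value(n))
# 10 -> 100 vs 1024; 30 -> 900 vs 1073741824
```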
We are learning how to bring bits and atoms together to create smart systems, systems that will experience the same cost curves as Moore's Law. But also think about what our world would be like if cars and planes actually followed Moore's Law. Nice metaphor, of course; but that is all it is. Well, what I am suggesting here is that the metaphor may turn into a reality when bits and atoms start to merge. If so, we are about to enter a new era affecting every material aspect of life.

The Internet will pass through three phases. The first phase is the one we are all familiar with: a network of networks with computers talking to each other. Slightly more interesting is the second phase, in which the Internet emerges as a medium. Over the next five or ten years, this will radically change what we conceive of as entertainment. [Kshitij: This, mind you, has come true in the exact duration specified; this is 2005.] In the next ten years, we will see the Internet emerge as a self-aware fabric that is always in the background of our lives. Wireless will have a tremendous impact on not only the last 100 feet of the media phase, but also the last yard or the last inch of the fabric phase, where thousands or perhaps even millions of little devices get tied together without tethering them with cables.

As we move forward, we will need to shift systematically away from computer science metaphors to biological metaphors. A real challenge will be designing a computational immune system into these immense and complex networks. We must accept that these networks will not work perfectly all the time. Viruses are here to stay. As we build computational infrastructures that serve as the lifeblood of the world, we must design them to be as robust as our own biological systems. This will require something equivalent to a computational immune system that can detect a virus and instantly attack it. The next major frontier for us is symbiotic computing.
The first generation of computing was personal computing, followed by social computi[...]
Sat, 8 Oct 2005 14:50:44 -0400
Sat, 8 Oct 2005 14:29:39 -0400
Fri, 9 Sep 2005 14:20:24 -0400
Fri, 9 Sep 2005 13:51:08 -0400
To overcome this will probably require a device that can solve problem (2) of the young professionals and problem (3) of the older generations. Devices like Sony's Librie, which uses E-Ink technology, are a good attempt at it. However, these devices have to come out with a standardized format (like MP3), and multiple devices supporting it will follow, the competition of course benefiting users with reasonable pricing and quality. However, it doesn't seem to be happening anytime soon (everything online is just taken for granted somehow), so print media can rest at ease.
Sat, 3 Sep 2005 15:16:52 -0400
Wed, 31 Aug 2005 13:53:16 -0400
Wed, 31 Aug 2005 13:31:15 -0400
Sat, 18 Jun 2005 10:56:18 -0400
Another trend that I anticipated, Apple's Mac OS being ported to x86 hardware, has come alive.
Sat, 18 Jun 2005 10:48:56 -0400
From VC blog:
I think Google has become so mainstream and so ubiquitous in our everyday Internet lives that it's lost its mojo in some ways. That doesn't mean it won't continue to be hugely relevant, hugely profitable, and hugely important. But it does mean that there's a vacuum that can get filled by others who are small, innovative, new, and exciting.
Google has recently launched some very attractive web services like Google Local and Google Maps. Their SMS service is a killer app for cell phones. It seems like they are launching a new web service every week. It's so fast and furious that it is making my head spin. But I don't understand how all of these new web services have anything to do with their core business of targeting advertising via search and contextual advertising. Do these services create more inventory for them to sell? Do they generate more data that allows Google to increase the relevance of the advertising? In some cases, like Local and Maps, I see the logic. In many other cases, it just seems like a laboratory turning out cool stuff and seeing what sticks.
And while they crank out more and more new stuff, their two core products, Search and Adsense, seem to be suffering from a lack of innovation. Adsense doesn't perform very well for publishers. So much so that many publishers are turning back to banners. And Google is also turning to banners. It's back to the future. That's not innovation.
But size is the enemy of efficiency and innovation. And Google has become a very big company very quickly. They are in Starbucks and McDonald's company now. That's great for them, but it's also great news for the little guy like Joe who can make a better cup of coffee or a better web service.
I agree in totality. Size affects decision making, which affects innovation, which negatively impacts the driving force of interest and achievement in the company, leading to lower productivity. I have always rated innovative product companies higher than monotonous outsourcing giants. It is a matter of great interest that many smart companies continue to remain small in staff while growing exponentially at the same time.
Sat, 18 Jun 2005 10:45:08 -0400
Impulse information is something that you need within a few seconds of thinking of it. If it takes too much time, then your addiction and impulse wears off. You want to find that one thing. You want to find it fast. You want to find it now. You know what it is you are craving. The challenge is just to get it quickly. You don't want to browse through a lot of pages. You don't want to sift through irrelevant content. You don't want to be bogged down by massive hierarchical structures. You want something flat, quick, and satisfying.
She has defined it in the context of web search in particular, but it is more and more relevant for all modern applications as well. Sometimes a user has seen some information and wants a link to something interesting about it, which they know is present in the same system. Detecting this trend, and accepting and processing random, ad-hoc user queries, is a top priority these days.
Sat, 18 Jun 2005 10:44:07 -0400
Amy Wohl comes up with her list of good and bad SaaS implementation ideas.

Good candidates for SaaS:
1. Net native applications, written to be delivered from a shared server, across the web, to a diverse population of customers who will be able to administer their own accounts.
2. Applications for which there is no differentiating value to your organization.
3. Applications which you need only occasionally (or which only a few of your employees use regularly) but which are expensive to install and support.

Bad candidates for SaaS:
1. The application is mission critical, so your IT department and your senior executives are nervous, very nervous, about letting it (or its data) be anywhere outside of their complete control.
2. Your application requires a great deal of customization.
3. Your favorite application was never designed to run in a multi-user environment, and forcing it to do so makes it very expensive or very slow.
I think point 3 from the Good list will be the driving force for near-future SaaS applications. It will be like Sun's recent effort of selling grid computing for $1 per hour. Although it was not exactly 'software' that was sold, the setup would otherwise be far beyond the reach of small enterprises.
I disagree, however, on point 2 of the Bad list. I think customization can be provided; it is just a question of how the system is designed. It will be pluggable; one solution cannot fit all. Applications given as a service will form part of the solution and not the total solution, and the selection of services in turn will make it highly customizable.
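To illustrate the pluggable idea, here is a minimal sketch (all names hypothetical) of a service catalog from which each customer composes their own solution; the selection itself becomes the customization, rather than a bespoke build:

```python
class ServiceCatalog:
    """Registry of independently deployable SaaS services (hypothetical sketch)."""

    def __init__(self):
        self._services = {}

    def register(self, name, factory):
        """Make a service available in the catalog."""
        self._services[name] = factory

    def compose(self, selection):
        """Build a customer-specific solution from the chosen services."""
        return {name: self._services[name]() for name in selection}

catalog = ServiceCatalog()
catalog.register("billing", lambda: "billing-service")
catalog.register("reports", lambda: "reporting-service")
catalog.register("crm", lambda: "crm-service")

# Two customers, two different "customizations" of the same shared platform
customer_a = catalog.compose(["billing", "crm"])
customer_b = catalog.compose(["reports"])
```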
Sat, 18 Jun 2005 10:43:29 -0400
Office documents to go XML. That will be the new default format in which MS Office is going to save. This will surely make them highly searchable. Should PDF follow suit? Should there be one general format that desktop search tools can optimize for to get at information? Well, it seems XML will greatly simplify that.
But I guess it will once again be raw data splashed about, because it will be something like specifying that this data belongs to the same paragraph, or this text is bold, this font, this background - something like HTML. Annotation can really help in searching for relevant information, and XML can really help in annotating content, even images. If used effectively, XML can very well be what the world defines as information: data with a context.
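As a toy illustration of "data with a context": if content is annotated semantically rather than just visually, a search tool can query by meaning instead of scanning formatting markup. The document and tag names below are invented for the example:

```python
import xml.etree.ElementTree as ET

# A hypothetical document annotated with semantic context, not bold/font markup
doc = """
<report>
  <section topic="sales">Q1 revenue grew 12 percent.</section>
  <section topic="hiring">Three engineers joined the team.</section>
</report>
"""

root = ET.fromstring(doc)
# Query by the annotation (the context), not by keyword-scanning raw text
sales = [s.text for s in root.findall("section[@topic='sales']")]
print(sales)  # ['Q1 revenue grew 12 percent.']
```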
Sat, 18 Jun 2005 10:42:47 -0400
Bluetooth is a technology that was hyped quite a lot in its early days, but it is only now that companies are coming up with products using it, and surprisingly the rate is increasing even with WiFi and WiMax (in the near future) in the picture. Bluetooth was recently used for connecting mobile phones to landlines whenever the network is available, such as at home or the office. Given the limited range of Bluetooth, it is surprising that cell phone manufacturers just want to stick to it more and more.
Sat, 28 May 2005 06:58:24 -0400
With market maturity, innovation takes a backseat. Philip Lay, at the recent 'Software 2005' conference, brought forward some excellent insights on the roles and types of innovation in the pre and post Mature Main Street phases of the TALC.

Disruptive Innovation kicks off the first phase of the TALC, the early adopters. An example of this is VoIP; I see even RFID and RSS fitting in this category. Next comes Application Innovation in the Bowling Alley phase. These are immediate, raw, low-level services offered using the disruptive technologies. Examples include SMS, and I feel current WAP sites, weblogs and even the Google APIs come under this. In the Tornado, the phase where things get rolling really fast, we see Product Innovation, and that is exactly where current consumer electronics and embedded systems lie. Examples include the iPod, TiVo, and maybe even Yahoo services under the common ID mechanism. I doubt whether Google would come under this, because it is basically just search till now. Platform Innovation is what Software 2005's industry visionaries and leaders think software technology as a whole has reached, and it is prevalent, as Philip says, in the early Main Street phase. Examples include relational databases, citing Oracle among them. I think this is surely where people believe Google is headed: its own Web OS platform, as yet unproven. Even the J2EE and .NET application development frameworks would fall under this, and embedded systems would reach it, hopefully soon.

That was pre Main Street. In mature markets, Philip cites two broad fronts along which innovations divide themselves. The first is the Customer Intimacy zone, and it includes: Line Extension Innovation, where the product is presented in different flavors for different needs; examples he lists include inkjet printers (HP) and servers (Sun). However, I feel Sun really deserves a lot of credit for the plethora of features in its version 10 offering, certainly more than a line extension. Enhancement Innovation is when the product is given minor enhancements, as in the case of mainframes (IBM) and laptops (Sony), as noted by him. Marketing Innovation, needless to say, is where the product is just given a feel-good factor by building extra facilities around it; examples include dedicated storefronts (Apple). Experiential Innovation, as in the case of executive dashboards (Cognos) and mediated Internet (AOL), is just an experimental addition to the product, perhaps without real market insight, or to test out the market.

The other category of innovations belongs to the Operational Excellence zone. These are enhancements in the routine operational usage of the system and include: Value Engineering Innovation, e.g. storage ATA RAID (Nexsan); these fulfill certain non-functional requirements like security, scalability and failover. Process Innovation, where the workflow is enhanced as per changing trends and market experience, e.g. online retail checkout (Amazon). Integration Innovation is perhaps the most challenging, and the buzzword these days: it includes integrating your software with existing systems in place to give the enterprise transparent portability to your application. It also includes making your application a plug-in for existing platforms, and allowing other applications to plug in to yours via generic APIs. Examples include ERP/SCM/CRM systems (SAP) and semiconductor chips (Intel). This perhaps comes n[...]
Sat, 28 May 2005 06:55:04 -0400
In the recent 'Software 2005' conference, the focus was on understanding the changing software ecosystem in the
Sat, 28 May 2005 06:53:27 -0400
Geoffrey Moore, I feel, is the kind of person who builds up from his model or his established standards, like Chris Anderson does with the Long Tail. His 'Technology Adoption Lifecycle' (TALC) model is well established now, and in his recent presentation at the OSBC he mapped it to Open Source products and put up the question of where they stand on it, concluding that it is perhaps in the 'Tornado' phase.
In the same presentation, he brought up interesting terms in the product development phase, naming them "Core" and "Context".
He represents Core as:
Any process that contributes directly to sustainable differentiation leading to competitive advantage in target markets.
and Context as:
All other processes required to fulfill commitments to one or more stakeholders.
He reiterates that as markets mature, offers commoditize. So Core turns into Context for the company as it starts to search for newer cores. I agree with him on that: in the case of Open Source, focusing on context is of prime importance; let the users of the open systems define their core and work on it, using the functionalities or standards of open systems as a base. Open Source works best for platform providers or library implementations for common use, and proprietary software is better at addressing core functional requirements.
I think Open Source will remain as the standardized enterprise platform enabler (for applications like servers, browsers, databases) and as library implementations for a long time. It will not do as well for end products. It will continue to provide services and customizations to proprietary firms for development of the final product for end users.
Sat, 9 Apr 2005 11:07:23 -0400
Got this Chris Anderson piece via EMERGIC: Is the bottom of the pyramid the Long Tail?

The similarities are notable. Both theories are based on the notion that if you break the economic and physical bottlenecks of distribution you can reach a huge, previously neglected market. They both recognize that millions of small sales can, in aggregate, add up to big profits. And they're both focused on ways to lower the cost of providing goods and services so that you can offer them at a lower price point while still maintaining margins. But despite the fact that it took me a trip to India to clear my head on this, I think there is a key difference between them that makes them fundamentally incompatible.

The Bottom of the Pyramid (BOP) argument is essentially based on commodification. Take existing goods and services and make them an order of magnitude or two cheaper, either to buy or to make but ideally both. Typically, this means reducing goods to their bare essentials and delivering them on a massive scale. This requires: 1) low price points; 2) minimal marginal costs (reduce consumables and packaging to the bare minimum); 3) "de-skilling" services so non-experts can deliver them; 4) the use of local entrepreneurs. The BOP model is focused on taking a single product or service and finding ways to make it cheap enough to offer to a larger, poorer, market. This is why I think it's essentially about commodification.

The Long Tail, on the other hand, is about nicheification. Rather than finding ways to create an even lower lowest common denominator, the Long Tail is about finding economically efficient ways to capitalize on the infinite diversity of taste and demand that has heretofore been overshadowed by mass markets. The millions who find themselves in the tail in some aspect of their life (and that includes all of us) are no poorer than those in the head.
Indeed, they are often drawn down the tail by their refined taste, in pursuit of qualities that are not afforded by one-size-fits-all. And they are often willing to pay a premium for those goods and services that suit them better. The Long Tail is, indeed, the very opposite of commodification. So the Long Tail is made up of millions of niches. The Bottom of the Pyramid is made up of mass markets made even more mass. Both lower costs to reach more people, but they do so in different ways for different reasons. They're complementary forces, but fundamentally different in their approach and aims. The Mystery of the Apparently Similar Theories: Solved.

Solved, yes, it is. However, what one might immediately gather from this is that, as an entrepreneur, one should follow the Long Tail's "nicheification" rather than BOP's "commodification", since the Long Tail focuses on pure profitability, unlike the social-welfare-centered BOP. But if one follows closely, one will find that the two theories are not actually competing with each other. For developing or underdeveloped mass-market economies, Long Tail opportunities are quite few... it's rather a haven for BOP commodification. The Long Tail is probably suited to top-class developed economies, where it is feasible to quote a premium or equivalent price for goods not in peak demand, and people are willing to pay to satisfy their unique tastes and preferences. While the BOP theory necessitates bulk sales from the [...]
Sun, 27 Mar 2005 03:17:56 -0500
Corante has an interesting discussion going on, speculating about the trends expected in the digital media industry. Some key points which I found worth pondering over:

More user-centric approach: The underlying operating paradigm in the music industry has been one of wanting complete and unfettered control (both of the artists and of the fans / 'users'), in fact, of often wanting control more than more revenue! The fact that the music biz continues to try and seize control is very disconcerting, and so, at this point, the industry is being dragged kicking and screaming into the digital age, which clearly is about giving control to the 'user', aka the customer. They should all take a page from eBay, Amazon, Southwest Airlines, TiVo and Netflix and empower the customers. They were used to looking at themselves as the ones in charge of their own kingdoms, and therefore by extension in charge of what their customers can or cannot do. With that type of attitude still lingering on, it is very hard for them to look under the hood and accept that their core business model and operating mode is being rapidly outmoded.

Subscription model: People are not going to repeatedly buy a massive amount of music using [the iTunes] model - ask anyone that owns one. On the other side of the equation, a model similar to Netflix (or Napster To Go), with people 'renting' music for a limited period of time rather than owning it, will be a model we'll see more of very soon. The artists and writers will make money by taking a percentage of the fees that are charged for renting access to their music. You establish a monthly subscription, you track what is rented, and you pay the content owners (including writer, publisher, artist and label) a pro-rata share of the amounts collected, based on actual use.
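The pro-rata payout described in the quote is straightforward to sketch. The numbers and names below are made up purely for illustration:

```python
def pro_rata_payout(collected: float, plays: dict) -> dict:
    """Split the fees `collected` among content owners by share of tracked plays."""
    total = sum(plays.values())
    return {owner: collected * n / total for owner, n in plays.items()}

# Say 1000.00 is collected in monthly subscriptions, and the service
# tracked 1000 total plays across three (hypothetical) rights holders:
payout = pro_rata_payout(1000.0, {"label_a": 600, "indie_b": 300, "writer_c": 100})
print(payout)  # {'label_a': 600.0, 'indie_b': 300.0, 'writer_c': 100.0}
```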
'Long Tail' effects: The 'unpopular' (or lesser-known) titles earn a disproportionately large share of the total revenue; in fact, the aggregate revenues of all lesser-known titles are often larger than those derived from the top-rated and most popular titles, which means that even lesser-known titles stand a good chance of being monetized. Finally, you can make money by selling niche music to niche markets, because the hurdles of distribution are removed or at least lowered. In my view, the biggest and most lucrative potential is clearly in niche markets, such as channels that offer very specialized music - jazz channels, new age music, folk music, ethnic music, that sort of thing - global niche markets whose total populations will add up very nicely.

Complementary businesses: A music rental site could sell merchandise, concert tickets, fan clubs, special event access and other stuff around the core rental business.

Mobile platform - an opportunity: One interesting trend is starting to take place in Asia, particularly in China. At the end of 2004, China had 334 million cell phone users (that's close to the total size of the U.S. population!). Therefore, the potential for the cell phone to be the prime distribution pipeline for digital music in China is absolutely mind-boggling, because these consumers are very likely to use their cell phones (not a computer) to go onto the Internet in the not-too-distant future. Simply put: I think that the mobile music opportunity dwarfs the PC/Internet music opportunity; and they are converging. [...]
Sun, 20 Mar 2005 08:22:13 -0500
I recently came across some innovative search engines that thought out of the box, trying to grab some piece of the search engine market pie. First, the one which impressed me the most: KartOO. The best thing about it was of course the interface. It links results like data warehousing/mining software and shows virtual groupings among the results. Good if you are doing research on some topic and want to know about particular aspects of certain things. The only problem that I faced while searching through KartOO was that my thinking was too "Googlish". KartOO displays the search text box too, but it additionally gives a map in Flash of how people think of or associate pages. So the question: when will users learn to adapt to new innovations? It will surely take some time for the masses to think differently, and that's a big plus for the already established search engines.
Second in line were "human" classifiers, like Furl and Topix, of which Furl impressed me the most. Furl has a USP which got me instantly hooked: saving pages for later, and bookmarks which I can take along with me, i.e. accessible from anywhere. Topix too has good articles, intelligently classified into relevant categories, with some local filtering too, if desired. I think the Furl model is the best for initial user appeal. It offers something others don't, and through that, people start using its other services as visible on their screens ;). Those who question whether Furl is a search engine at all should probably think about potential and not current status. Its classification is done by humans and is therefore "currently" more intelligent than AI operations. Also, the classification is done by the people who use the service, not managed by people dedicated by the site itself, and is therefore cost-effective and done in bulk.
Lastly, one more effort, pursued for a long time, is answering questions rather than searching keywords, initiated by AskJeeves and followed by the likes of Brainboost. This effort, I personally feel, has FAILED. The results just look like a normal search conducted on keywords. The vast majority of questions are left unanswered, and this is probably not the way people will search in the future, even if it is quite logical that people think in questions.
Sun, 20 Mar 2005 08:21:29 -0500
NYT writes...

Consumers are willing to spend millions of dollars on the Web when it comes to music services like iTunes and gaming sites like Xbox Live. But when it comes to online news, they are happy to read it but loath to pay for it. Newspaper Web sites have been so popular that at some newspapers, including The New York Times, the number of people who read the paper online now surpasses the number who buy the print edition. This migration of readers is beginning to transform the newspaper industry. Advertising revenue from online sites is booming and, while it accounts for only 2 percent or 3 percent of most newspapers' overall revenues, it is the fastest-growing source of revenue. And newspaper executives are watching anxiously as the number of online readers grows while the number of print readers declines.

"For some publishers, it really sticks in the craw that they are giving away their content for free," said Colby Atwood, vice president of Borrell Associates Inc., a media research firm. The giveaway means less support for expensive news-gathering operations and the potential erosion of advertising revenue from the print side, which is much more profitable. As a result, nearly a decade after newspapers began building and showcasing their Web sites, one of the most vexing questions in newspaper economics endures: should publishers charge for Web news, knowing that they may drive readers away and into the arms of the competition? Of the nation's 1,456 daily newspapers, only one national paper, The Wall Street Journal, which is published by Dow Jones & Company, and about 40 small dailies charge readers to use their Web sites. Other papers either charge for online access to portions of their content or offer online subscribers additional features.
"A big part of the motivation for newspapers to charge for their online content is not the revenue it will generate, but the revenue it will save, by slowing the erosion of their print subscriptions," Mr. Atwood said. "We're in the midst of a long and painful transition." Most big papers are watching and waiting as they study the patterns of online readers. Analysts said that the growth in readers was slowing but that readers appeared to be spending more time on the Web sites. "We're always looking at the issue," said Caroline Little, publisher of Washingtonpost.Newsweek Interactive, the online media subsidiary of The Washington Post Company. She said that the online registration process that most papers now require for use of their Web sites, while free, lays the groundwork for charging if papers decide to go that route. "You're getting information from your users and you can target ads to your users, which is more efficient for advertisers," she said. "This has been a dipping of the toe in the water."

This has been an observation, or rather a question, from my side too for a long time. How long can free be sustained? Can advertisement completely cover the online production, maintenance and distribution costs? Google has been surviving on the online text-ad revenue model for some time; is that model sustainable? Or is the age of subscription charges for online services like email, search and news about to dawn? Only time will tell, but it seems that [...]
Sun, 20 Mar 2005 08:20:55 -0500
I recently found Furl.net through one of John Udell's articles referred by EMERGIC. I instantly got hooked on it. It is a (currently) free service that helps you organize your bookmarks online and even keeps a saved copy of each page. It provides browser plug-ins to quickly "furl" a page, copying all of its current contents (incl. ads) into the saved copy. The reason for this, of course, is dead links later. The concept is simple, but fabulously executed. It allows you to attach keywords and comments and to categorize the article. It allows you to publish your bookmarks so that others can refer to them (or not, if you wish). You can get the "hot" furl additions of the day/week from the site, which are simply the pages people are increasingly "furling" for themselves. And of course a nice search interface makes it a complete online bookmarking service with portability (access from any browser on any machine), online storage (well, I hope they are ready for the bulk loads) and community networking. It seems Furl has hit the jackpot and is definitely more user-friendly than del.icio.us. This fits nicely into the next-generation search engine framework; let's see if the search giants adapt to this or buy it out. Furl certainly is here for the long term. In hindsight, though, I feel some issues might creep up, as some companies/governments might not like the idea of online availability of deleted/removed/banned content.
Fri, 18 Mar 2005 13:21:00 -0500
Chris Anderson has impressed me a lot with his writings on the theory of the Long Tail. I think I will probably be mildly biased towards his theory in some of my future writings and thoughts. The theory portrays how the capacity and opportunities of the niche should not be ignored in the "hit-driven" economies of the world. He points out how almost everything can have a potential market and how customers will pay for things if given a "choice" to own them, use them, subscribe to them, etc. His writings suggest that he thinks targeting the Long Tail is now even more attractive for businesses, with current technology innovations removing the constraints of shelf space, stocking costs and peer references. More in his own words: [It is] an entirely new economic model for the media and entertainment industries, one that is just beginning to show its power. Unlimited selection is revealing truths about what consumers want and how they want to get it in service after service, from DVDs at Netflix to music videos on Yahoo! Launch to songs in the iTunes Music Store and Rhapsody. People are going deep into the catalog, down the long, long list of available titles, far past what's available at Blockbuster Video, Tower Records, and Barnes & Noble. And the more they find, the more they like. As they wander further from the beaten path, they discover their taste is not as mainstream as they thought (or as they had been led to believe by marketing, a lack of alternatives, and a hit-driven culture). Most of us want more than just hits. Everyone's taste departs from the mainstream somewhere, and the more we explore alternatives, the more we're drawn to them. Unfortunately, in recent decades such alternatives have been pushed to the fringes by pumped-up marketing vehicles built to order by industries that desperately need them. Hit-driven economics is a creation of an age without enough room to carry everything for everybody.
Not enough shelf space for all the CDs, DVDs, and games produced. Not enough screens to show all the available movies. Not enough channels to broadcast all the TV programs, not enough radio waves to play all the music created, and not enough hours in the day to squeeze everything out through either of those sets of slots. This is the world of scarcity. Now, with online distribution and retail, we are entering a world of abundance. And the differences are profound. With no shelf space to pay for and, in the case of purely digital services like iTunes, no manufacturing costs and hardly any distribution fees, a miss sold is just another sale, with the same margins as a hit. A hit and a miss are on equal economic footing, both just entries in a database called up on demand, both equally worthy of being carried. Suddenly, popularity no longer has a monopoly on profitability. The industry has a poor sense of what people want. Indeed, we have a poor sense of what we want. We assume, for instance, that there is little demand for the stuff that isn't carried by Wal-Mart and other major retailers; if people wanted it, surely it would be sold. The rest, the bottom 80 percent, must be subcommercial at best. To get a sense of our true taste, unfiltered by the economics of scarcity[...]
Thu, 10 Mar 2005 11:21:38 -0500
Macromedia announced some time back that it had signed a licensing agreement with Nokia to provide Flash Lite support on Nokia's Series 60 phones. While I was overwhelmed by the news of Flash's entry into mobile phones, having keenly awaited it for over a year myself, I was slightly dejected that the adoption might not span all phone makers. Nokia, no doubt, is the current market leader, but its lead is steadily shrinking against Samsung, Motorola and the like. I hope Macromedia makes its product available to other phone makers too, else Flash might end up going the EDGE way, with not many handsets around, keeping the technology from becoming a mass hit. I would have preferred it going the J2ME way. Oh well, it's a start anyway. But it seems I was not the only one waiting; there were others in the queue too. Russell Beattie keenly awaited it for 2 years, as he puts it. More from the link:
Flash Lite 1.1 still has some issues (like no file support - you can't save state, and the fact that it's based on older Flash 4.0 tech) and that the developer tools are still oriented towards designers not programmers, but still it's a great announcement. From what I've seen of Flash Lite, the applications developed are smaller, more compelling and quicker to program than their J2ME MIDP counterparts. Flash Lite will add another "middle-layer" programming platform to the Series 60/Symbian OS. Python will be great for hackers and maybe corporate developers, Flash will be great for consumer media-based apps. I can't believe it took this long to happen.
This is a great announcement, I can't wait to start seeing the Flash content flow. First thing, Macromedia and/or Nokia needs to make it a one-click option to grab the mobile player from their sites - the press release insinuated as much, and I hope it happens soon. My other wish is for Macromedia to create an IDE for Flash Lite - I can't deal with freakin' timelines. Give me drag/drop controls and a text editor for the Action Script please!
Completely agree with him on the first point. Flash will be for entertainment WAP sites and downloadable entertainment applications. I can sense the movie and music industries going all WAPpy with this tool, and games to follow. J2ME will be more for business applications, yes. Unfortunately, extensions of business applications have so far been restricted to palmtops. I sense that changing as soon as the memory on standard phones increases dramatically.
On the second point, I must admit I never thought in that direction. I, of course, being a Java software engineer, would love widgets and coding a la Eclipse or Visual Studio. If Flash ever wants business application makers to take it seriously, it has to provide this kind of development environment. All in all, good luck Macromedia; hoping to see Flash on my mobile soon!
Wed, 9 Mar 2005 11:06:07 -0500
I had anticipated this move long back, of Yahoo doing well in the music industry, in one of my previous articles, and along comes news of Yahoo's plans to launch a music player and music store to add to its plethora of services on the net. Yahoo is so well placed right now, with its ID mechanism in place and doing wonders. Its Mail, Briefcase, Groups, Messenger, Launch, Geocities, Games, Chat, Mobile and Photos are so well integrated that every user tends to explore them many a time without reason.
Anyway, back to Music, CNET has more:
Yahoo's full-fledged entry into the digital-music retail business could help shift a market that has remained tilted strongly in Apple's favor. Yahoo has already built a large and loyal following for its streaming-music and video service, and could parlay that into music sales. Indeed, the company's Launchcast radio services was the highest-rated Webcasting service online in January, according to ratings firm Arbitron and ComScore Media Metrix, attracting more than 2.2 million people that month.
However, Apple's dominance has been challenged by other giants, ranging from Sony to Microsoft, without substantially decreasing the iPod maker's market share. Last week, Apple said it had sold more than 300 million songs through its iTunes store since its launch.
"You have to look at how to create a linkage between a device and the online service," GartnerG2 analyst Mike McGuire said. "But given Yahoo's traffic and their very active communities, the potential (for success) is there." Yahoo has begun to streamline its music and multimedia properties over the past few months, changing the name of its Launch site to Yahoo Music and consolidating its entertainment businesses in a Santa Monica, Calif., office near Hollywood. The new MusicNet-powered music service will be integrated into Yahoo's existing infrastructure, possibly including features such as links to its popular instant-messaging program, sources said. MusicNet's technology allows companies to offer subscription services or per-song downloads, and is used by Virgin Digital, America Online and others. Sources close to the company said the new service is likely to launch by the end of the month.
Now things like these make me lament the fact that I haven't updated this blog for almost 2 months now. Something I hope to correct real soon. Well, this is a start!
Wed, 12 Jan 2005 11:48:09 -0500
The mobile industry jumped leaps and bounds this year: camera phones proliferated, along with the race towards 262k screen colors, built-in MP3 players, video recorders, 3G, Wi-Fi, mergers and of course Windows and Linux making their way inside. From CNET... Hybrid phones make a splash. By late spring, cell phone makers were introducing Wi-Fi phones, bringing new threats and opportunities to wireless carriers and traditional phone service providers. The highly anticipated hybrid phones let people make connections through a local wireless Internet access point, switching over to a cellular network whenever necessary. The result: greater flexibility in mobile communications. Hybrid handsets can use both data and voice applications, with most of the attention focused on data until recently. But that's changing, thanks to technology improvements for managing call transfers between Wi-Fi and cell phone networks and the increasing popularity of VoIP on corporate networks. Early versions of Wi-Fi cell phones failed miserably because of the enormous drain on the batteries--which must support two chipsets rather than one--and because users were forced to manually switch between networks. But at least one phone maker, Motorola, now claims to have solved the automatic transfer problem. Expected? Hell yeah! Hybridism continues and there's nothing stopping it. These phones will not remain the exception but become the norm. However, there's still time before their wide adoption, so 2005 will probably just make the scene more promising for the future. Besides the cost, there's even network bandwidth that one can save with such a device. However, switching calls between networks without disconnection will be the biggest hurdle; it seems some companies are already claiming to have cracked it. This will also flare up more alliances (if not mergers and acquisitions) and make the networks more interoperable. 3G? 4G? Wi-Fi? VoIP? It was definitely a year of confusion, and that confusion is not yet resolved.
The hybrid solution perhaps is the best bet everyone can have, but what all will comprise the hybrid? Let's see the progress in some of these technologies: 3G: The promise of wireless broadband has been tantalizing mobile mavens for some time now. Cellular providers such as Verizon Wireless, Sprint PCS, Cingular, AT&T, T-Mobile, and, most recently, Nokia, have been baiting these masses by releasing a spate of products and services that they call 3G. This "third generation" of cellular technology, after previous waves of analog and voice-only digital services, is supposed to combine voice with broadband packet data transmission delivering fast Web surfing, streaming video and audio, multimedia messaging, and other services. But while the radio technologies that American carriers have installed are technically 3G, the services are more akin to dial-up Internet than broadband. By focusing on video services too early operators risk undermining revenue per MB says a new report from telecoms watchers, Analysys. Operators in Japan and South Korea come under attack from Analysys for focusing on sophisticated mul[...]
Sun, 9 Jan 2005 06:30:20 -0500
Lately I have been cramped with work, getting less and less time for blogging my thoughts. The 2005 new year celebrations went by, and all blogs and journals, in the last week of the past year as well as the first weeks of the new, critically analyzed and reviewed the events of 2004 and came out with their predictions on the "what-to-expect"s of 2005. I couldn't miss out on this opportunity, now could I ;) Though a bit late, I will share my thoughts on the events which will possibly shape the things to come. CNET has a great series going on the 2004 year in review, and I will put my analysis on top of that. So for this first part, I will concentrate on the change in Apple's strategy over the year, from a proprietary and rigid business model to a slightly more open, flexible and adaptive model recognizing the change the industry is undergoing. Well, the start has been good enough; let's see what's in store ahead. I will follow CNET's commentary and add my comments inline. For a change, I will try out a new layout this time :): iPod dwarfs iMac Apple Computer, which rolled the dice three years ago with a hand-size MP3 player the size of a deck of cards, came up boxcars. This year, Apple was largely doubling down on the bet it made in 2001. At Macworld Expo in January, Apple took the iPod and made it a mini. Sales of the iPod rivaled those of the Mac for much of the year, before ultimately dwarfing those of the Mac in the October quarter, at least in number of units sold. Expectations for holiday sales grew into the millions as the iPod topped Christmas wish lists. iTunes - The right vibes? [In the last year,] Apple CEO Steve Jobs announced the company's plans to sell tunes to Windows users. And while the company hasn't magically converted them all to the Mac view of the world, it has made a pretty nice business for itself. Apple has sold tens of millions of songs and more than doubled the number of iPods it is selling.
The Mac maker won't say how many of its songs or players are going to Windows users, but it's reasonable to think it's a pretty good chunk, given the relative prevalence of PCs. Apple has clearly established the iTunes Music Store as the standard for legitimate music sales. According to new data from the NPD Group, iTunes retained a 70 percent market share for digital downloads between December 2003 and July 2004, the last month for which data is available. "iTunes has set the standard in online music in terms of sales, usability, and in the quality of its library," said Yankee Group analyst Mike Goodman. "They're the ones who cracked the code, and everyone is following in their footsteps." While the success of iPods and iTunes is overwhelming, everyone is still skeptical about whether this is the right technique for selling online music, or whether there is a better option available. Well, as for me, with Microsoft following suit in a similar fashion with Windows Media Center, it's definitely going to be the standard for at least the next few years. However [...]
Sun, 26 Dec 2004 03:55:25 -0500
I came across an excellent article describing the opportunities, challenges, changing focus and sustainability of the outsourcing model, currently making hay while the sun shines. The article describes the emphasis of outsourcing companies on improving their productivity, protecting their markets from wannabes and addressing the biggest question: the sustainability of their growth. So without much ado, let's directly analyze it (my comments are inline): Instead of relying solely on captive centers or third party providers for their outsourcing needs, companies are increasingly turning to hybrid structures, says Ravi Aron, a professor of operations and information management at Wharton. "The debate over one or the other is really fading away, and firms are going toward what's called an 'extended organizational form' which brings together the strengths of the two models. It gives companies a way to say what they want done but also say how they want it done." Essentially, the client firm's managers act as very senior managers of a third-party provider. For instance, New York-based Office Tiger, a BPO solutions provider that has set up operations in Chennai, India, has a system through which companies can make day-to-day changes to processes, adding in verification layers. "I call this 'virtual prowling,'" says Aron. "In most captives, you are able to have a senior manager prowl the floor. So when a third-party provider gives you fine-grain analysis capability, you can still monitor all of these things." Thus, the client firm can see which teams are excelling at which processes - and start picking the composition of new teams based on that knowledge. Well, what else can I say but "Hybridism" has made its way here too, and why not? Offer clients selective services and they feel more secure and dynamic.
Being transparent about the process does give one's customer the feeling of being in command, and it builds the greater trust that is important for the long run. Even firms that swear by captive centers acknowledge that there is scope for more outsourcing. Peter Nag, vice president and head of the global program management office at Lehman Brothers, notes that Wall Street firms often go to captive sites in part because there's a disconnect in domain knowledge between the young managers in India and their older counterparts in the U.S. "We were able to offshore about 20% of our technology within the first year. But we couldn't get beyond that, because we had project managers in their 40s working with people in their 20s. Our projects were complex and proprietary, and we needed a high degree of control. But captive doesn't equal not outsourcing - they both do work and outsource, and it can open the way for more outsourcing once high quality work is proven." I feel domain knowledge transfer is one aspect of outsourcing that scares the West. However, it becomes unavoidable if one is to reap the benefits of cheaper labour. Domain expertise commands high prices among firms specializing in outsourci[...]
Sat, 25 Dec 2004 08:39:43 -0500
Gone are the days when a single approach would be adopted by all, when a single technology would attract most (let alone all), when a single proposition would excite your clients. Customers now don't want to get locked into one standard; they want a variety of choices available at any time to adapt to their changing needs and times. In the current technology industry, where change is part of the plan, it takes a lot to entice your customers to stick to your product range and offerings, and to trust that they won't end up adapting to your changing supplies rather than you adapting to their changing demands.
The information technology industry especially has the seeds of oligopoly sown pretty deep, and final consumers often feel forced into adopting a particular standard/format more because of a lack of options or imposed choices. One trend, however, that has already started to break this jinx is Hybrids. Hybrids, crossbreeds in technology terms, can mean a product that can adapt to different technologies and inputs, producing logically similar results for each of those varied inputs. What Hybrids promote is inherent assistance on the path to ultimate Convergence.
So we have hybrids in many technology industries, even outside IT. We have hybrid cars, fueled by petrol or electricity. The very basic advantage of such an offering is the freedom to choose: you are no longer bound to rising fuel prices affecting your monthly budget. And once this variety becomes a standard, we can also see support "plug-ins" (as the techies might say ;) ) opening up even more choices to you. Then we have cell phones supporting different frequencies to help you stick to the same handset during international travel. We should even have a CDMA-GSM hybrid for India. Later on, we should be able to add VoIP and 3G onto the same, if required.
In IT, the trend is picking up. Solaris 10's support for native Linux applications is one of the biggest hybrid OS moves coming up, with the cool advantage of your Linux apps becoming executable on a Solaris box. IBM's hybrid database supports data in both native and XML formats; this feature will surely push the concept of the liquid XML database further. Ultimately a single query could be used irrespective of the database vendor, depending only on your database design. Even the iPod's multi-format support allows one to choose between proprietary formats and the general MP3 standard. Going ahead, convergence in the form of Hybrids will probably be seen in the television arena (BitTorrent, TiVo, video-on-demand), gaming devices (one device, any vendor format: Xbox, PS2, PSP, GameCube, N-Gage, PC), web search (text, videos, audio, shopping) and computing (take away a part of your desktop as a smaller pocket PC, probably :) ), amongst others.
Sun, 19 Dec 2004 05:17:02 -0500
J. Scott Edwards has presented a nice technique for optimizing the data files applications require without compromising interoperability. I think it applies well to applications too, in addition to the OS itself, on which he stresses:
My idea is to have all of the information stored on the disk in the native Object format of the program. That way instead of having to constantly convert and interpret data, the application can just access that object directly. And when that data is needed in a flat file format, you have a converter App (object) that can access the internal data and convert it to a flat file type of format. For example, let's say you have some compressed (with Ogg Vorbis or whatever) audio objects on your computer. And you want to burn an audio CD which can be played in a normal audio CD player. You would create a playlist object and connect the output (more on this later) to the input of a Ogg Vorbis converter object and then into the Audio CD burning object.
Though fairly basic, I haven't seen many applications using this approach so far.
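The converter-object idea Edwards describes can be sketched in a few lines. This is only a toy illustration, not his implementation: the names `AudioObject` and `FlatFileConverter` are invented, with Python's `pickle` standing in for native-object storage and JSON standing in for the flat interchange format.

```python
import json
import pickle


class AudioObject:
    """Toy stand-in for an application's native in-memory object."""
    def __init__(self, title, samples):
        self.title = title
        self.samples = samples


class FlatFileConverter:
    """Converter object: produces a flat, interoperable format on demand."""
    def convert(self, obj):
        return json.dumps({"title": obj.title, "samples": obj.samples})


# Day to day, the application persists and reloads the native form directly,
# with no parse/serialize round-trip through a flat format.
native_bytes = pickle.dumps(AudioObject("demo", [1, 2, 3]))
track = pickle.loads(native_bytes)

# Only when another program needs the data is a converter invoked,
# much like piping a playlist object through an Ogg Vorbis converter
# before burning an audio CD in his example.
flat = FlatFileConverter().convert(track)
print(flat)
```

The point of the pattern is that the conversion cost is paid only at the interoperability boundary, not on every load and save.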
Sat, 18 Dec 2004 13:48:29 -0500
A couple of days back, I caught a nice little show on the Indian news channel Headlines Today, named "Top 5 Today", which featured the renowned media name Vir Sanghavi as host, discussing the changing face of Indian and international journalism with reference to the intimate photographs of actors splashed across a mainstream newspaper. A few interesting thoughts he put forward:
Journalism today is more about people than issues. This is a global trend, not one confined to India.
If the photographs were those of a political figure or an industrialist, the media could have been accused of "unethical" journalism. But since they concerned film actors, it's all legal. Actors themselves give the press entry to their private parties and want their personal life details published in the media. They want to be talked about.
Journalism, unfortunately, is also a business, and businesses have to earn profit. There are some people who practice very wise "ethical" journalism, so to speak. Then again, unfortunately, one of them goes bankrupt each year!
Sat, 18 Dec 2004 13:48:12 -0500
Innovation isn't what innovators do; it's what customers, clients, and people adopt. Innovation isn't about crafting brilliant ideas that change minds; it's about the distribution of usable artifacts that change behavior. Innovators-their optimistic arrogance notwithstanding-don't change the world; the users of their innovations do. That's not a subtle distinction.
Sat, 4 Dec 2004 07:23:38 -0500
Almost as a continuation of the earlier article, here's how Sun is in a much better position than IBM. It already has its own OS, Solaris (which, I must point out, has been a bit subdued till now), and that OS already has the ability to exploit the power of its hardware offerings. Sun is launching its Niagara processor by 2006: The Niagara chip has eight processing engines, or cores, each capable of running four simultaneous instruction sequences, or threads. Though it lacks circuitry to maximize the speed with which a given thread will run, Sun expects the chip to be useful for replacing large numbers of lower-end servers. Niagara is a crucial part of Sun's attempt to keep the Sparc family of processors relevant in the face of widely used x86 chips from Intel and Advanced Micro Devices and increasingly powerful Power processors from IBM. Niagara was spawned at start-up Afara Websystems, which Sun acquired in 2002. Each processor core on the chip juggles four threads, switching from one to another when one is held up by slow communications with the computer's main memory. Sun is touting the processor as a solution to power consumption woes in corporate data centers. Each Niagara processor consumes 56 watts. By contrast, it's not unusual for a high-end server chip to use between 80 watts and 120 watts. Also, an older CNET article points out that they are banking on large central servers really being the key area for chip manufacturers, as thin clients compromise a bit on computing power. I would endorse that view. Sun's throughput computing plan is designed to vastly increase the power of servers and thus to reclaim momentum Sun has lost to Intel. The technique, which won't result in chips larger than those from competitors, sacrifices the ability to perform one task extremely quickly for the ability to do multiple independent tasks simultaneously.
Sun has changed dramatically in the last year, dropping its argument that its Solaris operating system and UltraSparc processors are sufficient for all computing needs and letting the Linux operating system and Intel processors into its product line. But essentially the company is sticking by one of its mainstay principles: Leave the computing work to large central servers, not to desktop machines. In McNealy's vision, rather than each person having his or her own desktop computer, many people will share centralized servers. They'll carry not laptops but tokens that will grant them access to their private computing resources. "The shared resource model blows the doors off" the dedicated model, McNealy said. As an alternative to PCs, Sun has loudly trumpeted its Sun Ray system, which does no processing on its own but instead relies on a central server. Sun is working on a future version called WAN Ray that can use wide-area network technology such as DSL lines or cable modems to connect to the server, McNealy said. Ultima[...]
Sat, 4 Dec 2004 07:22:24 -0500
Jonathan Schwartz had quoted this in a very interesting article describing IBM's failure to push its chip segment due to the lack of a good software strategy for it. The thought is particularly interesting with the chip industry at this juncture, where chip manufacturers are trying to either define their own segment, adapt to a successful segment, or beat the leader at his own game: Intel, AMD, Sun, IBM and many others are in the race. Jonathan points to an earlier article of his describing how IBM found itself quite a laggard in the race. I will snip out some key points... IBM CEO, John Akers. Akers and his staff had the wisdom to enter the PC market in its early days, but the short sightedness to suggest customers source their PC operating system from a little company in the Pacific northwest. The company turned into Microsoft, and they continue to generously return the fruits of their coup to their stockholders. A few years back, IBM and HP both hopped onto the social movement called linux. It's a wonderful movement. But the bad news for IBM is that the vast majority of enterprise datacenter deployments are now occurring on Red Hat's linux. And with Red Hat increasing price, while adding in an application server that competes with WebSphere, IBM's finding itself in the uncomfortable position of having lost control of the social movement they were hoping to monetize. They're beginning to look like the IBM of Mr. Akers's era - having missed the forest for a tree, and finding themselves without an operating system. And with most enterprises having picked Red Hat on IBM's recommendation, IBM now clumsily realizes it's invited the fox into the hen house. With Red Hat running on the majority of IBM's proprietary hardware, Red Hat can now direct those customers to HP and Dell. Even Sun.
Now if you're an IBM customer, you've probably received (or should prepare to receive) the pitch from IBM incenting you to move off Red Hat to SuSe. Bringing in SuSe at the last minute isn't having nearly the effect IBM desires - at least from the customers, developers (and press) I speak with. Moving from Red Hat Enterprise Server to SuSe's Enterprise Linux is very complicated (eg, which application server do you pick?), and with IBM's consulting bill, very expensive. IBM is in a real pickle. Red Hat's dominance leaves IBM almost entirely dependent upon SuSe/Novell. Whoever owns Novell controls the OS on which IBM's future depends. Now that's an interesting thought, isn't it? I'd keep a close eye on the Novell/SuSe conversation. If IBM acquires them, the community outrage and customer disaffection is going to be epic... but where else does IBM go? And the next quotes are a real belting (they feature in the original link)... I'm watching with amusement as IBM prepares to stub its toe with their new, curiously named &qu[...]
Mon, 29 Nov 2004 12:44:03 -0500
SOAP has proved that interoperability can be achieved with standardization. Neither CORBA nor DCOM could achieve it; neither of them had the broad support that SOAP did. And the raw bases of SOAP are XML and industry-wide acceptance. Now security has gathered much pace these past few years, escalated mostly by flaws found in MS software and by the bigger worms and spam bots doing the rounds. Security is probably the least standardized area of IT. We have all heard of firewalls and IDSs (Intrusion Detection Systems), but there has never been a dedicated protocol, language or framework for security. Every vendor simply defines security in his own way and the clients have to adapt to it. OASIS (Organization for the Advancement of Structured Information Standards) has come out with a new security interoperability standard, AVDL (Application Vulnerability Description Language). This new standard seems to have at least two of the benefits of SOAP: XML data and broader industry acceptance. More from Net-Security and related... The Application Vulnerability Description Language (AVDL) is a rather new security interoperability standard within the Organization for the Advancement of Structured Information Standards (OASIS) that was first proposed in April 2003 by several leaders within the application security space. AVDL creates a uniform way of describing application security vulnerabilities using XML. With dozens of security patches and application level vulnerabilities released each week, enterprises must deal with a constant flood of new security patches from their application and infrastructure vendors. To make matters worse, network level security products do little to protect against these vulnerabilities at the application level.
To address this problem, enterprises today have deployed a host of best-of-breed security products to discover application vulnerabilities, block application-layer attacks, repair vulnerable web sites, distribute patches and manage security events. Enterprises view application security as a continuous lifecycle. Unfortunately, there is currently no standard way for these products to communicate with each other, making the overall security management process far too linear, manual and time-consuming. Enterprise customers are asking companies to provide products that interoperate. A consistent way to describe application security vulnerabilities via XML is a significant step towards that goal. Today, these vendors proposing AVDL are actively engaged in projects whereby XML-based vulnerability descriptions will be used to improve the responsiveness and effectiveness of attack prevention, event correlation, and remediation technologies. XML establishes a common framework, but XML alone does not ensure vendor interoperability. AVDL Benefits Throughout the Application Lifecycle: Developers and Quality Assurance During the application developme[...]
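The whole point of AVDL is that a scanner, a firewall and a patch manager can all read the same XML description of a vulnerability. As a minimal sketch of that idea (the element and attribute names below are my own illustrative assumptions, not the actual AVDL schema published by OASIS), one tool could emit a description and a completely different tool could consume it:

```python
# Hypothetical sketch of exchanging an AVDL-style vulnerability description.
# Element/attribute names are illustrative assumptions, NOT the real AVDL schema.
import xml.etree.ElementTree as ET


def build_vulnerability_report(vuln_id, severity, url, summary):
    """Producer side (e.g. a scanner): serialize one finding as XML."""
    root = ET.Element("vulnerability-report")
    vuln = ET.SubElement(root, "vulnerability", id=vuln_id, severity=severity)
    ET.SubElement(vuln, "target-url").text = url
    ET.SubElement(vuln, "summary").text = summary
    return ET.tostring(root, encoding="unicode")


def read_severity(xml_text):
    """Consumer side (e.g. a firewall or patch manager): parse the same XML."""
    root = ET.fromstring(xml_text)
    return root.find("vulnerability").get("severity")


if __name__ == "__main__":
    report = build_vulnerability_report(
        "VULN-2004-001", "high", "http://example.com/login",
        "SQL injection in login form",
    )
    print(read_severity(report))
```

The producer and consumer share nothing but the agreed XML vocabulary, which is exactly the kind of loose coupling the AVDL proposal is after.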
Mon, 29 Nov 2004 12:41:30 -0500
As if the earlier ones were not bad enough, we have another round of broad patents, this time on the very basic Web Services concept, going up for "auction". What could be worse than giving the general public a chance to gamble on screwing the established organizations? It's really silly that companies are coming together to buy the patents themselves (they will retire them, as mentioned on CNET). This news will surely send the bids much higher, and the chances of rigging only grow now. Craig Smith, the founder of CommerceNet, the company proposing to buy and retire the patents with funding from other established companies, has put it very humorously: "It's a little bit like paying the blackmailer before they have something to blackmail you about."
Sat, 20 Nov 2004 14:35:46 -0500
Solaris is one of the oldest operating systems around, but not many talk about it or take it seriously (thanks to the media). Windows and Linux, Linux and Windows, Red Hat and Microsoft and SUSE - that's what we all keep hearing about. Solaris is always treated as an also-ran, along with the likes of HP-UX, IBM's AIX etc. It really wasn't clear whether Sun was phasing it out of its product line or preparing for something big. Perhaps something as big as Solaris 10. Solaris 10 is great on paper. The new features are quite tempting, for me at least. It seems like a big last effort from the software major to finally make it the third OS of the media world - the first and the second being MS Windows and Red Hat/SUSE Linux, of course. I tend to take their names together many times simply because I think their future lies in working closely with each other and developing a standards-based Linux. If they drift apart they are more likely to end up third and fourth, with someone taking over the second position; someone like Solaris. What I liked most about Sun Microsystems' approach to the development of Java was its controlled development. Even though Red Hat and IBM kept pushing Sun to make Java open source and extensible, Sun judged the platform would be better served by centralized, controlled development. That is what Sun chose, and it is what made Java special and made it all work right. If there were different flavors of Java available today, as is the case with Linux, J2EE would be a very distant second to MS's .NET platform. But as luck would have it, the first and second positions in the development platform race are really debatable, and in fact J2EE commands the better position today, in my opinion. So back to Solaris 10: Sun Microsystems will make it open source, but in Java style - the development process will continue to be controlled by Sun itself. And the story doesn't stop there.
Along with making it open source under a specific license, Sun Microsystems will also launch a patent protection plan, as CNET reports: When Sun Microsystems releases Solaris as open-source software, it plans to provide legal protection from patent-infringement suits to outsiders using or developing the operating system--one of several ways Sun hopes to make Solaris more competitive with Linux. "You should have a company that can protect you and take that $92 million bullet," Scott McNealy said. Sun also has an arsenal of patents it can use as the basis for countersuits against computing companies, he said, adding that "most people with network-computing intellectual property probably don't want to come after us, because we might go right after them." But open-source developers using Solaris technology need not fear that Sun's patent arsenal will be used against them, Sun President Jonathan Schwartz said. "It is not our intent to say, 'Here is our intellec[...]
Sat, 13 Nov 2004 10:37:18 -0500
Yahoo and USAToday confirm that Microsoft is outselling PalmSource in handheld software. Now, this really comes down to the quality of the software, its user-friendliness, better compatibility/interoperability with your home PC ;), application variety and of course marketing. If Linux, Sun and Palm now cry over it, it will really be a pity. I think some of MS's rivals are just not pushing hard enough. There is Apple, which raced ahead with its iPod, and Google, which exploited the less-explored search technology. Good luck to them in maintaining their leads. But there are so many markets where MS is simply wiping out the others; I would certainly give it a lot of credit for its achievements. More from those articles...
Microsoft overtook PalmSource in the third quarter as the world's biggest operating system for handheld computers, a survey showed on Friday. Microsoft's Windows operating system accounted for 48.1% of worldwide shipments of personal digital assistants (PDAs), up from 41.2% in the year-ago period, according to July-September statistics from research group Gartner.
Palm's share dropped to 29.8% in the third quarter of 2004 from 46.9% in the same period last year. Canada's Research In Motion, which produces the hardware and software for its popular BlackBerry wireless e-mail devices, was a strong third, quadrupling its global market share in twelve months to 19.8% from 4.9%.
Worldwide shipments of personal digital assistants (PDAs) increased 13.6% to 2.86 million units. Linux remained a distant fourth and lost market share as it is running on only 0.9% of handheld computers, down from 1.9% a year ago.
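It's worth converting those percentages into rough unit counts, since the total market also grew 13.6%. The arithmetic below is my own back-of-the-envelope derivation from the figures quoted above, not Gartner's published unit breakdown, but it shows Windows gained in absolute shipments while Palm shrank even in a growing market:

```python
# Back-of-the-envelope conversion of the quoted Gartner percentages into
# approximate unit shipments (Q3 2004 PDA market). Derived figures only.
TOTAL_Q3_2004 = 2_860_000            # worldwide PDA shipments, from the article
GROWTH = 0.136                       # 13.6% year-over-year growth
total_q3_2003 = TOTAL_Q3_2004 / (1 + GROWTH)   # implied year-ago total

shares_2004 = {"Windows": 0.481, "Palm": 0.298, "RIM": 0.198, "Linux": 0.009}
shares_2003 = {"Windows": 0.412, "Palm": 0.469, "RIM": 0.049, "Linux": 0.019}

for vendor in shares_2004:
    units_now = TOTAL_Q3_2004 * shares_2004[vendor]
    units_then = total_q3_2003 * shares_2003[vendor]
    print(f"{vendor}: ~{units_now:,.0f} units (year ago: ~{units_then:,.0f})")
```

By this estimate Windows moved from roughly 1.04 million to 1.38 million units, while Palm fell from roughly 1.18 million to 0.85 million - an absolute decline, not just a share loss.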
The handheld computer market is competing with the faster growing smartphone market, which is expected to double to 20 million units this year. Symbian provides the dominant software in that market segment of advanced mobile phones which can run computer-like applications like navigation software and email.
Symbian looks weak to me, because it has the same problems as Palm. Windows is already in the smartphone market. Linux is, as usual, a laggard. One more run for Microsoft soon? Let's wait and watch.
Sat, 13 Nov 2004 10:36:50 -0500
Perhaps one of Sun's biggest bets was always the emergence of multiple operating systems. Java - a platform-independent language (or even a platform) - always banked on multiple OSs working together towards a combined information delivery model. Its selection as the platform on NASA's Mars Rover arguably proved that point. LinuxWorld has an interview with John Loiacono, executive vice president of Sun Microsystems, on Sun's Linux strategy and a bit more... There are two different questions that you have asked, maybe three. What is Sun's viewpoint on open source? What is Sun's viewpoint on Linux? What is Sun's viewpoint on Red Hat? Sun was founded on the principle of open source. We have contributed more lines of open source code than any other entity on the planet except for Cal Berkeley. NetBeans, Sun Grid Engine, OpenOffice, and Solaris are all technologies that use the open source process, and we will continue to do so. We'll remain a heavy contributor on the open source front, and it will remain a key component of how we develop software. People don't realize today that a huge portion of Solaris is open source. For example, today we use GNOME as our desktop environment. We use Mozilla. We have integrated Apache. We have SAMBA. All of these pieces of software are a part of Solaris today. Some people think that open source is new to Sun and that we don't get it. We are a pioneer. Sun, I think, hasn't been very successful with its own product range - NetBeans, Sun Grid Engine, OpenOffice, and Solaris. However, it is still banking big on Solaris. In fact, the moment he separates out the internals of Solaris into GNOME, Mozilla, Apache, SAMBA etc., we see all success stories. We firmly believe that Linux (server and desktop) is an x86/AMD phenomenon. We believe that this will continue. Understanding that it does run on other architectures, 99% of the volume generated in the Linux space is on x86.
We think that Linux will continue to be a big player, including on the desktop where people are concerned about cost and want an alternative to Windows. Linux is something that we'll have to interoperate with because it may exist far beyond whatever Solaris turns out to be. We are in favor of Linux. We think that the Linux movement is great and that the open source process is great. We are leveraging open source in our software stack where it makes sense. Perhaps the key takeaway from the whole interview: x86/AMD phenomenon. Linux is well suited to other platforms, even embedded ones, but what it is known for is the x86 market. This, I think, will have to change sooner rather than later, and Linux will have to be thought of more as a machine operating system (read mobiles, palms, televisions, cars, machinery, embedded devices) rather than just a desktop system. However, we[...]
Sun, 31 Oct 2004 04:52:14 -0500
Will of the TVHarmony weblog has posted some excellent thoughts on on-demand television, on-demand video or whatever you might want to call it (he calls it Addressable Television). I subscribe to his views completely, and will bring out the highlights of his blog: I like the term "addressable television" to describe the ability to get television content in a similar fashion to getting web content. The area of disagreement is which technology is going to "win". Here are the contenders as the group saw it:
* Video on Demand (VOD)
* TV delivered via phone lines (IPTV)
* Video on the Internet (Streaming)
* Downloadable Internet (BitTorrent)
Many of the crowd there found the BitTorrent model compelling, citing the history of the music industry and Napster as likely to be repeated for video. I tend to agree that to a certain extent this is already happening, with people avoiding copyright law and putting up content on the web, and the roadblocks to moving video streams from a DVR to the internet are quickly eroding. Here's the basic point: I think the advent of the media-centric PC will cause this trend to accelerate. If my family room is driven by a PC with a DVR, set top box, and web browser built into it, connected to cable for both programming and high speed data, and then connected to a nice big flat panel display, the option to watch a show via live TV, VOD, DVR, or BitTorrent is just a click of the remote. While I agree that is compelling, I think there are hurdles to making this vision work in the long term. I think they will be overcome, but for a large percentage of the population, VOD, especially if it expands to become a centralized DVR, is likely going to be the easier solution. First, I think the battle will ultimately be played out on HDTV. The cost of HDTV is getting lower each day, and more and more people are buying HDTV-ready sets.
More and more content is being delivered in HDTV format, and it won't take too much time before people demand HDTV streams as a viewing preference. Second, I view people's relationship with video differently from their relationship with music. People listen to music over and over again, but in general, video is a single-use commodity (I have a 3-year-old daughter, so I can tell you there are exceptions to that rule). This changes the calculus slightly in that the pain of downloading a video has to be less than the pain of downloading a music track, or it doesn't seem worth it. Third, there is a shelf-life issue. Music is fairly easy to store, and since people listen to it over and over again, it has a long shelf life on a networked computer. A lot of video content has an expiration date and while[...]
Sun, 31 Oct 2004 04:51:25 -0500
I consider it a good habit to listen to some of the influential people in the software industry and their thoughts on the future progress of the industry. Not only do many of these people's views have an impact on the industry as a whole, but their knowledge and experience make them think before they speak, and they are often in a position to analyze things better than the media or any other developer. I would have to add that it's not always the case, though. I have a select set of people whose thoughts are quite in sync with mine on future trends in computing and software in general, chief among them Bill Gates and Scott McNealy. So, for this particular annual gathering of tech professionals at the Gartner Symposium and Information Technology Expo, I caught Yahoo's summarized article on some of the chief issues discussed. My comments are inline. John Chambers (CEO of Cisco Systems) Q: Cisco has opened a research and development center in China and launched a venture fund in India. Do you expect most of your growth to be outside the USA? A: The majority of our job growth will be in America. In China, we're adding 95 jobs in one to two years. That's normal growth. We're a little different (than many other tech companies) in that we want to keep the majority of our jobs here. Q: Why? Wouldn't sending jobs outside the USA save costs? A: It's good business to try to do right by your employees. We try to treat our people like we would like to be treated. We want balanced growth globally. We're very open with our employees about that. Unlike software firms, hardware firms are less likely to catch on to the outsourcing burst, I feel. This is because while you just need connectivity to develop, maintain and test software offshore, it's not that easy for hardware. Especially in the case of India, hardware is just not making leaps at the same rate as software.
The infrastructure setup is partly to blame, but overall the industry has not picked up much due to a shortage of local demand. Q: Cisco has made a number of announcements related to security, including partnerships with Microsoft and IBM. So far, security has largely been relegated to software companies, and some analysts say hardware makers need to do more. What role should hardware makers play? A: There should be relatively open standards so that a consortium of companies (can work together). We work with our software application partners and even our competitors. We can get along with IBM and Microsoft and Sun (Microsystems). The industry as a whole has to work on it. It's our biggest opportunity and our biggest challenge. Somehow many of the industry's problems always end up at this same stage - lack of standardization. It's a[...]
Sun, 31 Oct 2004 04:50:52 -0500
In an excerpt from an email interview with Linus Torvalds, I came across some good advice from him for start-ups, and I think it really is quite logical, though a little uncommon:
Nobody should start to undertake a large project. You start with a small _trivial_ project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision.
So start small, and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly over-designed. And don't expect people to jump in and help you. That's not how these things work. You need to get something half-way _useful_ first, and then others will say "hey, that _almost_ works for me", and they'll get involved in the project.
And if there is anything I've learnt from Linux, it's that projects have a life of their own, and you should _not_ try to enforce your "vision" too strongly on them. Most often you're wrong anyway, and if you're not flexible and willing to take input from others (and willing to change direction when it turned out your vision was flawed), you'll never get anything good done.
In other words, be willing to admit your mistakes, and don't expect to get anywhere big in any kind of short timeframe. I've been doing Linux for thirteen years, and I expect to do it for quite some time still. If I had _expected_ to do something that big, I'd never have started. It started out small and insignificant, and that's how I thought about it.
Sat, 23 Oct 2004 05:27:26 -0400
This is a pretty old article, but pretty interesting nonetheless. The lessons to be learnt from it are still valid in the present scenario. Joel Spolsky notes down some of his thoughts on how Microsoft eventually lost its stronghold of developer support for the Win32 API. I think it was always eventually on the cards, but some of MS's decisions definitely led to a speedier defeat. My comments are inline. Microsoft's crown strategic jewel, the Windows API, is lost. The cornerstone of Microsoft's monopoly power and incredibly profitable Windows and Office franchises, which account for virtually all of Microsoft's income and covers up a huge array of unprofitable or marginally profitable product lines, the Windows API is no longer of much interest to developers. The goose that lays the golden eggs is not quite dead, but it does have a terminal disease, one that nobody noticed yet. Remember the definition of an operating system? It's the thing that manages a computer's resources so that application programs can run. People don't really care much about operating systems; they care about those application programs that the operating system makes possible. Word Processors. Instant Messaging. Email. Accounts Payable. Web sites with pictures of Paris Hilton. By itself, an operating system is not that useful. People buy operating systems because of the useful applications that run on it. And therefore the most useful operating system is the one that has the most useful applications. The logical conclusion of this is that if you're trying to sell operating systems, the most important thing to do is make software developers want to develop software for your operating system. I quite agree with this. In fact, my thoughts are quite in sync with Joel's. It's the reason MS's Windows is so hard to replace, even with a better OS, let alone a not-so-user-friendly one.
In fact, the open sourcing of Linux and IBM's Eclipse was the only possible move to attract developers to adopt the new thing, which otherwise would have ended up in the same state as Unix today. Why Apple and Sun Can't Sell Computers? Because Apple and Sun computers don't run Windows programs, or, if they do, it's in some kind of expensive emulation mode that doesn't work so great. Remember, people buy computers for the applications that they run, and there's so much more great desktop software available for Windows than Mac that it's very hard to be a Mac user. And that's why the Windows API is such an important asset to Microsoft. In a similar vein, one could look at the mobile market. It was dominated by Symbian, but the lack of applications (plug and play, if you may) on that platform, I think, didn't make it a must-have for your mobile. Java hit the market [...]
Sat, 23 Oct 2004 05:26:49 -0400
I guess this article is only for people like me who are newbies to the nanotechnology world. I came across a BBC News article about the discovery of a new nanofabric called graphene. More from the same... Called graphene, it is a two-dimensional, giant, flat molecule which is still only the thickness of an atom. The nanofabric's remarkable electronic properties mean that an ultra-fast and stable transistor could be made. Scientists have been trying to exploit this for computing because smaller transistors mean the distances electrons have to travel become shorter, meaning faster speeds. Conventional transistors rely on the semi-conducting characteristics of silicon which provide the switches that change the flow of current in computers and other electronics. "All the recent progress has been on nanotubes for transistors. These are sheets of graphite molecules wrapped in a cylinder - like a chocolate cylinder you stick in your ice cream," explained Professor Laurence Eaves. "Although these are interesting, because they are one-dimensional, they have limitations. Graphene is a plane transistor - flat sheets." Professor Andre Geim, who leads the research team, explained that the material they have discovered could be thought of as millions of unrolled carbon nanotubes which have been stuck together to make an infinitely large sheet, an atom thick. They showed that electrons could travel sub-micron distances without being scattered, which means fast-switching transistors. He added: "People have been trying to make transistors faster and smaller. There is a Holy Grail of electronics that engineers call ballistic transistors - ultimately faster than anything." A ballistic transistor is where electrons can shoot through without collisions, like a bullet. In other words, they have what is called a long mean free path - the distance a molecule travels without colliding into another.
Greater distances with nothing to collide with mean faster speeds. Fewer collisions mean less energy is lost or given off, too. Although they have not demonstrated a ballistic transistor yet, their experiments have shown that the material could, in theory, produce one. I also ventured further in the quest for a little more knowledge of this emerging field and came across a beautiful presentation from the same site. Some excerpts from that presentation, explaining the gist of nanotechnology and its diverse uses: Nanotechnology concerns materials and working devices that are engineered at the scale of atoms and molecules. Advances in nanotech will impact electronics and computing, medicine, cosmetics,[...]