Share This

Wednesday, March 31, 2010

What is new in Malaysia’s New Economic Model?

Prime Minister Najib has announced the broad outline of the proposed New Economic Model (NEM) at the Invest Malaysia conference.

 
Malaysia's New Economic Model proposes a number of strategic reforms.

The objective of the NEM is for Malaysia to join the ranks of the high-income economies, but not at all costs. The growth process needs to be both inclusive and sustainable. Inclusive growth enables the benefits to be broadly shared across all communities. Sustainable growth augments the wealth of current generations in a way that does not come at the expense of future generations.

A number of strategic reform initiatives have been proposed. These are aimed at greater private initiative, better skills, more competition, a leaner public sector, pro-growth affirmative action, a better knowledge base and infrastructure, the selective promotion of sectors, and environmental as well as fiscal sustainability.

The next step of the process will be a public consultation to gather feedback on the key principles; afterwards, the key recommendations will be translated into actionable policies.
The NEM represents a shift of emphasis in several dimensions:
  • Refocusing from quantity to quality-driven growth. Mere accumulation of capital and labor quantities is insufficient for sustained long-term growth. To boost productivity, Malaysia needs to refocus on quality investment in physical and human capital.
     
  • Relying more on private sector initiative. This involves rolling back the government’s presence in some areas, promoting competition and exposing all commercial activities (including that of GLCs) to the same rules of the game.
     
  • Making decisions bottom-up rather than top-down. Bottom-up approaches involve decentralized and participative processes that rest on local autonomy and accountability —often a source of healthy competition at the subnational level, as China’s case illustrates.
     
  • Allowing for unbalanced regional growth. Growth accelerates if economic activity is geographically concentrated rather than spread out. Malaysia needs to promote clustered growth, but also ensure good connectivity between where people live and work.
     
  • Providing selective, smart incentives. Transformation of industrial policies into smart innovation and technology policies will enable Malaysia to concentrate scarce public resources on activities that are most likely to catalyze value.
     
  • Reorienting horizons towards emerging markets. Malaysia can take advantage of emerging market growth by leveraging on its diverse workforce and by strengthening linkages with Asia and the Middle East.
     
  • Welcoming foreign talent including the diaspora. As Malaysia improves the pool of talent domestically, foreign skilled labor can fill the gap in the meantime. Foreign talent does not subtract from local opportunities; on the contrary, it generates positive spillover effects to the benefit of everyone.
 Overall, the New Economic Model demonstrates the clear recognition that Malaysia needs to introduce deep-reaching structural reforms to boost growth. The proposed measures represent a significant and welcome step in this direction. What will matter most now is the translation of proposed principles into actionable policies and the strong and multi-year commitment to implement them.

Source: http://blogs.worldbank.org/eastasiapacific/node/2887
 ------------------------------------------------------------------------------- 
Malaysia's ‘New Economic Model’

KUALA LUMPUR, March 30 — Malaysian Prime Minister Datuk Seri Najib Razak today unveiled a raft of economic measures that he said would propel this Southeast Asian country to developed nation status by 2020.
Following are some of the highlights of what he announced:
•    State investor Khazanah to sell 32 percent stake in Pos Malaysia.
•    To list stakes in two Petronas units.
•    Facilitate foreign direct and domestic direct investments in emerging industries/sectors.
•    Remove distortions in regulation and licensing, including replacement of Approved Permit system with a negative list of imports.
•    Reduce direct state participation in the economy.
•    Divest GLCs in industries where the private sector is operating effectively.
•    Strengthen the competitive environment by introducing fair trade legislation.
•    Set up an Equal Opportunity Commission to cover discriminatory and unfair practices.
•    Review remaining entry restrictions in products and services sectors.
•    Phase out price controls and subsidies that distort markets for goods and services.
•    Apply government savings to a wider social safety net for the bottom 40 per cent of households, prior to subsidy removal.
•    Have zero tolerance for corruption.
•    Create a transformation fund to assist distressed firms during the reform period.
•    Ease entry and exit of firms as well as high-skilled workers.
•    Simplify bankruptcy laws pertaining to companies and individuals to promote vibrant entrepreneurship.
•    Improve access to specialised skills.
•    Use appropriate pricing, regulatory and strategic policies to manage non-renewable resources sustainably.
•    Develop a comprehensive energy policy.
•    Develop banking capacity to assess credit approvals for green investment using non-collateral based criteria.
•    Liberalise entry of foreign experts specialising in financial analysis of viability of green technology projects.
•    Reduce wastage and avoid cost overrun by better controlling expenditure.
•    Establish open, efficient and transparent government procurement process.
•    Adopt international best practices on fiscal transparency. — Reuters

Source: http://www.themalaysianinsider.com/index.php/business/58004-malaysias-new-economic-model
 -------------------------------------------------------------------------------------------------------
Related articles:

•    Malaysia must act now to retain competitiveness - Malaysia Star
•    Malaysia needs to get out of its economy's way - Interactive Investor
•    The leap that Malaysia must make - Business Times (subscription)
•    Income inequality remains difficult to overcome - Malaysia Star





Apple Sued Over iPad Patent Infringement

By Dan Hope, TechNewsDaily Staff Writer 

With the public release of the Apple iPad looming, Elan Microelectronics, a Taiwanese chipmaker, is suing Apple, claiming many Apple products infringe on its multitouch patents. 

Elan has asked the International Trade Commission (ITC) to ban imports of the iPhone, iPod Touch, MacBook, Magic Mouse and even the yet-to-be-released iPad.

"We have taken the step of filing the ITC complaint as a continuation of our efforts to enforce our patent rights against Apple's ongoing infringement. A proceeding in the ITC offers a quick and effective way for Elan to enforce its patent," the company said in a statement.

Elan says it owns patents covering "touch-sensitive input devices with the ability to detect the simultaneous presence of two or more fingers," which is exactly what these Apple products do. Apple has not released a formal response to the lawsuit yet.

This isn't the first time Elan has sued over its multitouch patent. Two years ago it sued Synaptics in a similar case. Synaptics ended up entering a licensing deal with Elan, but it's not a foregone conclusion that Apple will do the same, since Apple is no stranger to prolonged legal battles.

There is also an element of irony in Apple being sued for multitouch patent infringement because the company recently brought a similar suit against smartphone maker HTC. Apple said HTC phones with the Android operating system infringed on over 20 Apple patents, including some that had to do with multitouch interfaces.
The lawsuit won't affect sales of pre-ordered iPads slated to go on sale this Saturday, many of which have already shipped.

Source:  http://newscri.be/link/1058559


Better media links help China, India


BEIJING - Strengthened media cooperation between India and China will help improve understanding and promote more beneficial bilateral ties between the two countries, officials from both sides proposed on Tuesday. 

"China and India are enjoying a relationship which is deepening and broadening," S. Jaishankar, the Indian ambassador to China, said at the 2010 India-China Development Forum in Beijing. Jaishankar noted in his speech that both nations had witnessed some controversial and negative media coverage about each other last year, but said it was "no use blaming each other". 

Jaishankar proposed that China shift its focus from the various media debates in India to an evaluation of the results those voices actually bring about.


"Our media coverage will be more positive if we promote our relationship, and of course, a more efficient interpretation and dialogue is needed for such progress."

Wang Chen, minister of the State Council Information Office, also noted the importance of the media, as direct communication between the two peoples was limited. 

"China and India together account for almost half of the world's population; more intensified media coverage by both countries about our progress and efforts is much needed," he said. Wang proposed that both countries report in a more positive and all-round manner, as well as cover mutual achievements. 

"We hope the media will become the window of understanding for both sides," Wang said. 

"Although both Chinese and Indian media have made great strides in recent years, the Western media still had the upper hand. China and India get to know each other through Western media outlets such as CNN and BBC, which somehow lead to misunderstanding. The media cooperation should be enhanced between the two countries." A media cooperation committee was also proposed during the forum. 

Zeng Jianhua, executive director of the Department of Asian, African and Latin American Affairs at the Chinese People's Institute of Foreign Affairs said such a panel would help China and India put aside differences due to their different political and cultural backgrounds, and seek a common ground for mutual development.

By Hu Haiyan and Ai Yang (China Daily)  Updated: 2010-03-31 07:48


Source: http://newscri.be/link/1058551



Greenpeace: Cloud Computing Greenhouse Gas Emissions to Triple

BY Ariel Schwartz

Make IT Green

As cloud computing-fueled devices like the iPad grow in popularity, so will associated greenhouse gas emissions, according to Greenpeace's "Make IT Green" report. The report, which dubs 2010 the Year of the Cloud, offers up a disturbing statistic: Cloud computing greenhouse gas emissions will triple by 2020.

The increase in emissions makes sense. As we increasingly rely on the cloud to store our movies, music, and documents, cloud providers will continue to build more data centers, many of which are powered by coal. Facebook, for example, recently announced that it is building a data center in Oregon that will be powered mostly by coal-fired power stations, much to the chagrin of groups like Greenpeace.

The solution to the cloud computing problem is fairly obvious. Greenpeace explains in its report, "Companies like Facebook, Google, and other large players in the cloud computing market must advocate for policy change at the local, national, and international levels to ensure that, as their appetite for energy increases, so does the supply of renewable energy." As we've noted before, companies like IBM, Google, and HP have already begun to make strides in cutting data center energy use. But there is still plenty of work to be done--as it stands, the cloud will use 1,963.74 billion kilowatt hours of electricity by 2020.
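As a back-of-the-envelope check on Greenpeace's projection, a tripling over the decade from 2010 to 2020 implies a compound annual growth rate of roughly 12 per cent. The arithmetic below is illustrative, derived from the report's headline figure rather than taken from it:

```python
# Illustrative: the constant annual growth rate implied by a tripling
# of cloud computing emissions between 2010 and 2020.
years = 2020 - 2010
growth_factor = 3.0  # "will triple by 2020"
annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate:.1%}")  # about 11.6% per year
```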

Source: http://newscri.be/link/1058493

Intel (finally) uncages Nehalem-EX beast

Like Itanium. But you might actually use it

Intel's switch to the Nehalem architecture was finally completed Tuesday with the launch of the Nehalem-EX Xeon 6500 and 7500 processors, the last of the Core, Xeon, and Itanium chips to get the Quick Path Interconnect and a slew of features that make Intel chips compete head-to-head with alternatives from Advanced Micro Devices. The price war at the midrange and high-end of the x64 market can now get underway, while the all-out, total price war awaits the debut of AMD's Opteron 6100 processors in the second quarter.

Since the summer of 2008, Intel has been previewing its top-end, eight-core Nehalem-EX beast, which we now know as the Xeon X7560. As it has done with prior generations of Xeons, the Nehalem-EX line does not comprise one or two chips, but a mix of chips with different features (clock speed, cache memory, HyperThreading, and Turbo Boost) dialed up and down to give customers chips tuned for specific workloads.

While last year's Nehalem-EP Xeon 5500 and this year's Westmere-EP Xeon 5600 processors are aimed at workstations or servers with two sockets, with the Nehalem-EX lineup Intel has broadened the definition of its Expandable Server (this is apparently what EX is short for, while EP is supposed to be an abbreviation for Efficient Performance) to include two-socket machines as well as the four-socket and larger machines that prior generations of Xeon MP processors were designed for.

Intel, no doubt, would have preferred to keep the Xeon DP and Xeon MP product lines more distinct, and charged a hefty premium for machines that needed expanded processor sockets or memory capability. But server makers and their customers were having none of that. With the rapid adoption of server virtualization and the need for larger memory footprints even for two-socket boxes, the Nehalem-EX processors have been tweaked so they can be used to support very fat memory configurations on even two-socket workhorse servers. This will eat into the volume Xeon 5500 and 5600 market, to be sure, but it is better to sell a Xeon 6500 or 7500 server in a two-socket box than have a customer dump Intel for AMD.

The Xeon 6500 and 7500 processors will also blur some lines between Xeon processors and the former "flagship" Itanium processors, which were supposed to take over the desktop and server arena starting a decade ago, but have been relegated mostly to high-end servers from HP running HP-UX, NonStop, and OpenVMS at this point in their history. The Itaniums were distinct in many ways from the Xeons, but the main distinction they held was better reliability, availability, and serviceability (RAS) features than Xeons had, and on par with mainframe, RISC, and other proprietary architectures from days gone by.

[Die shot: the eight-core Nehalem-EX Xeon 7500 beast]

But at the launch event today in San Francisco, Kirk Skaugen, vice president of the Intel Architecture Group and general manager of its Data Center Group, made no bones about the fact that the Nehalem-EX processors and their related Boxboro chipset that is shared with the Itanium 9300 processors launched in early February have common RAS features.

The new chip, explained Skaugen, has 20 new RAS features, including extended page tables and virtual I/O capabilities as well as a function that is in mainframes, RISC iron, and Itaniums called machine check architecture recovery, which allows a server to have a double-bit error in main memory and cope with it without halting the system. With Windows, Solaris, and Linux supporting these RAS features, as well as VMware's ESX Server hypervisor, this makes servers based on the Xeon 7500s just as suitable a replacement for proprietary midrange and mainframe platforms and RISC/Unix servers as the formerly beloved Itaniums.

Skaugen said that the Nehalem-EX chips would allow server makers to create two-socket servers that support up to 512GB of main memory, nearly three times as much as AMD can do using 8GB DIMMs with the Magny-Cours Opteron 6100s announced yesterday. Intel will be able to support 1TB of main memory in a four-socket configuration, while the controller inside the Opteron 6100 only allows a four-socket machine using these chips to address a maximum of 512GB.
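Working backwards from the capacities Skaugen quoted, the DIMM arithmetic can be sketched as follows. The per-socket slot counts are derived here from the article's figures, not taken from Intel's datasheets:

```python
def dimms_per_socket(total_gb, dimm_gb, sockets):
    """Number of DIMMs each socket must host to hit a memory target."""
    total_dimms = total_gb // dimm_gb
    return total_dimms // sockets

# Figures quoted above: 512GB across two sockets and 1TB across four,
# both assuming 8GB DIMMs.
print(dimms_per_socket(512, 8, 2))   # 32 DIMMs per socket
print(dimms_per_socket(1024, 8, 4))  # 32 DIMMs per socket
```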

Skaugen rubbed it in a little that Intel's Nehalem-EX partners had over 50 new products in rack, tower, and blade form factors, and that it had 75 per cent more four-socket designs than with any prior server chip launch in its history. A dozen OEM partners have 15 different servers in the works that will span eight or more processor sockets, and apparently some are pushing their designs up to 16, 32, or 64 sockets.

The big bad box at the Nehalem-EX launch, of course, was the Altix UV massively parallel supercomputer, which El Reg told you all about last November. The Altix UV machines allow for up to 2,048 cores (that's 256 sockets and 128 two-socket blades) to be lashed together in a shared memory system suitable for running HPC codes. The shared global memory is not the same as a more tightly coupled symmetric multiprocessing (SMP) or non-uniform memory access (NUMA) cluster used in general purpose servers for running applications and databases. But that said, the Altix UVs are very powerful machines indeed and are intended to scale to petaflops of performance.

The Boxboro chipset that Intel is shipping as a companion to the Nehalem-EX chips supports configurations with two, four, or eight sockets gluelessly. If you want more sockets than that, you have to create your own chipsets, as HP, IBM, Silicon Graphics, and Bull have done for sure and others will no doubt follow.
But you can't just plug any old Nehalem-EX chip into any old configuration. That would be too simple, and Intel likes to charge premiums for features, like most capitalists. Take a gander at the feeds and speeds of the Nehalem-EX lineup:

[Table: the Intel Nehalem-EX Xeon 7500 and 6500 processors]

The first thing you will notice is that there are two different families of Nehalem-EX processors. The Xeon 7500s are aimed at general-purpose workloads and offer the most socket expandability. All of these chips can be used in two-socket or four-socket boxes, and some of them can be used in eight-socket or larger machines, too. The Xeon 6500s are cut-down versions of the chips that only work in two-socket boxes and that are specially tuned for the HPC market. These chips, explained Skaugen, were optimized to have the highest bytes per floating point operation ratio while minimizing the amount of node-to-node communication among the processors in the complex.

The top-end X7560 part has eight cores spinning at 2.26GHz, has 24MB of L3 cache on the chip, and is rated at 130 watts using Intel's thermal design point (TDP) scale. The chip supports Turbo Boost, which allows a core to have its cycle time jacked up if other cores are shut down when they're not being used, and it also supports Intel's HyperThreading simultaneous multithreading, which virtualizes the physical pipeline in the chip so it looks like two virtual pipelines to a system's operating system and its applications. In best-case scenarios, HT can boost performance of applications by around 30 per cent. In 1,000-unit trays, the per-chip price for the X7560 is a whopping $3,692. That is exactly what Intel charged for a dual-core Montvale Itanium 2 with 24MB of L3 cache.

The X7550 drops the clocks down to 2GHz, chops the L3 cache down to 18MB, and the price comes down to $2,729, which is exactly what Intel was charging for its top-bin six-core Dunnington Xeon X7460 processor running at 2.66GHz with 16MB of L3 cache. The next part down, the X7542, jacks the clocks up to 2.66GHz, keeps the 18MB cache, cuts out HyperThreading, and reduces the core count from eight to six; the price drops to $1,980.

For that same $1,980 you can get a standard 105 watt part, the E7540, running at 2GHz with six cores and that same 18MB cache. If you are willing to take lower clock speeds, you can get even cheaper standard parts, the E7530 and E7520, which cost $1,391 and $856, respectively. Intel has also cooked up two low-voltage parts, the L7555 and L7545, running at 1.86GHz and rated at 95 watts, which have eight and six cores, respectively. These are reasonably pricey chips that will no doubt be used inside Nehalem-EX blade servers where a premium is expected in exchange for extra density.
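One way to compare the SKUs for which the article quotes both core counts and tray prices is price per core. This is a reader's sketch using only the numbers above:

```python
# Price per core for the Nehalem-EX SKUs with both figures quoted above
# (1,000-unit tray prices in US dollars).
skus = {
    "X7560": {"cores": 8, "price": 3692},
    "X7550": {"cores": 8, "price": 2729},
    "X7542": {"cores": 6, "price": 1980},
    "E7540": {"cores": 6, "price": 1980},
}
for name, spec in skus.items():
    print(f"{name}: ${spec['price'] / spec['cores']:.2f} per core")
```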

Generally speaking, the Xeon 6500 processors are cheaper than their Xeon 7500 counterparts because they have some features and functions turned off, as El Reg predicted they would last fall. This is in keeping with the general philosophy that HPC shops are super-stingy and will not pay one extra penny for a feature they don't want and will never use.

The Nehalem-EX processors are implemented in 45 nanometer processes and have 2.3 billion transistors. ®

Source: http://newscri.be/link/1058499

Google mocks Steve Jobs with Chrome-Flash merger

Mountain View comes out of the plug-in closet
When Steve Jobs met Google boss Eric Schmidt for coffee late last week, they may or may not have reached some common ground on certain hot-button subjects. But odds are, they didn't see eye-to-eye on Adobe Flash. As Jobs prepares to ship his much ballyhooed Apple iPad without even the possibility of running Flash - which he calls "buggy," littered with security holes, and a "CPU hog" - Google is actually integrating the beleaguered plug-in with its Chrome browser.

With a blog post on Tuesday, Mountain View announced that Flash has been integrated with Chrome's developer build and that it plans to offer similar integration with its shipping browser as quickly as possible.

Google has been known to say that HTML5 is the way forward for internet applications. But clearly, it believes in the plug-in as well, and it has no intention of pushing all development into the browser proper.
"Just when we thought that Google was the champion of HTML5 they turn around and partner with Adobe on Flash to ensure that the web remains a mess of proprietary brain damage," one netizen said in response to Google's post.

Last summer, Google proposed a new browser plug-in API, and with today's blog post, it also said that Adobe and Mozilla have joined this effort. "Improving the traditional browser plug-in model will make it possible for plug-ins to be just as fast, stable, and secure as the browser’s HTML and JavaScript engines," the company said. "Over time this will enable HTML, Flash, and other plug-ins to be used together more seamlessly in rendering and scripting.

"These improvements will encourage innovation in both the HTML and plug-in landscapes, improving the web experience for users and developers alike."

What's more, Mountain View is developing a native code browser platform of its own, dubbed Native Client. This is already rolled into Chrome, and it will be an "important part" of the company's browser-based Chrome operating system, set for launch in the fall.

By integrating Flash with Chrome, Google said that it will ensure users always receive the latest version of the plug-in and that it will automatically update the plug-in as needed via Chrome's existing update mechanism. And in the future, the company added, it will include Flash content in Chrome's "sandbox," which restricts the system privileges of Chrome's rendering engine in an effort to ward off attacks.

In July, with a post to the Mozilla wiki, Google proposed an update to the Netscape Plug-in Application Programming Interface (NPAPI), the API still in use with browsers like Chrome and Firefox, and both Adobe and Mozilla are now working to help define the update.

"The traditional browser plug-in model has enabled tremendous innovation on the web, but it also presents challenges for both plug-ins and browsers. The browser plug-in interface is loosely specified, limited in capability and varies across browsers and operating systems. This can lead to incompatibilities, reduction in performance and some security headaches," Google said today.

"This new API aims to address the shortcomings of the current browser plug-in model."
The new setup was developed in part to make it easier for developers to use NPAPI in tandem with Native Client. "This will allow pages to use Native Client modules for a number of the purposes that browser plugins are currently used for, while significantly increasing their safety," Google said when the new API was first announced.

Native Client and NPAPI have been brewing for months upon months, but today's Chrome announcement would seem to be a conscious answer to Steve Jobs' hard-and-fast stance on Flash. Presumably, the company sees this as a way to ingratiate itself with existing Flash shops who've been shunned by the Apple cult leader.
One of the many questions that remain is whether Chrome will give users the option of not installing Flash. With the new developer build - available here - you must enable integrated Flash with a command line flag. ®

Source: http://newscri.be/link/1058500

Tuesday, March 30, 2010

Google enhances website analytics

In its continuing quest to be more than just the world’s preferred search engine, Google recently added new features to its free website analysis program aimed at enterprises.

“Web Analytics is essentially a sophisticated website monitoring system,” said head of Web Analytics at Google South-East Asia Vinoaj Vijeyakumaar.

“Beyond just noting how many people visit your site, you can see what they do there and how much time they spend doing it.

“You can set and manage sales goals and receive automatic business reports based on those goals. This kind of intelligence can greatly improve productivity in any industry,” he said.

With the new enhancements, Google added about 20 preset goals to the Web Analytics repertoire.
In-depth intelligence reports have also been enhanced. However, the company acknowledged that the algorithms used for those reports will not be made publicly available.

To help enterprises get the most out of Web Analytics, Google has appointed “authorised consultants” who are certified by the company to train staff members in how to use the program.

“We have three authorised consultants based in Singapore and we hope to open one in Malaysia very soon,” said head of communications for Google South-East Asia Dickson Seow.

“Knowing how to use all the features in the most effective manner can help online traders stay ahead of the game.” For more information, surf to www.google.com/analytics.
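To make the "sales goals" idea above concrete, the metric at the heart of such reports is a simple conversion rate. The function below is an illustrative sketch, not Google's code:

```python
def conversion_rate(goal_completions, visits):
    """Fraction of visits that completed a configured goal."""
    return goal_completions / visits if visits else 0.0

# e.g. 45 completed checkouts out of 1,200 site visits
print(f"Goal conversion: {conversion_rate(45, 1200):.2%}")  # 3.75%
```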

By STEFAN NAIDU
intech@thestar.com.my

Sun's IBM-mainframe flower wilts under Oracle's hard gaze

Larry Ellison likes to buzz rotten fruit off some corporate type’s head. Over the years Microsoft, PeopleSoft, BEA Systems, SAP, and Red Hat have lined up to be duly pelted during calls with Wall St or during Ellison's company's mega OpenWorld customer and partner conference.

It's all good theater in the crucible of Silicon Valley, but it's theater nonetheless, and a form of performance that will always have a shallow veneer. When there's money involved, you can say what you want about your rivals during a conference call - it's just words.
 
For example: almost two-thirds of SAP implementations run on Oracle's database, which means SAP - a company regularly pilloried by Ellison - actually translates into big money and helps keep Oracle's chief executive in yachts.

Turning to Oracle's acquisition of Sun Microsystems, then, it's with some justification that people involved in technologies spun up during Sun's era of a thousand blooming flowers, technologies with little visible business return on investment, should now feel worried.

Users of Sun's Project Kenai hit the panic button recently after Oracle said it was bringing Sun's Web 2.0 code-hosting site in-house. Oracle U-turned, blaming a - ahem - "miscommunication".

The OpenSolaris community started screaming that it was being ignored by Oracle. The giant responded to say it wasn't ignoring them, it was just overworked getting its arms around the whole Sun thing.

To the ranks of the concerned, you can now add those working to put Solaris and OpenSolaris on IBM's Z-series mainframe. One Solaris on Z-series supporter contacted The Reg to say:

The SystemZ port of Solaris is dead. Oracle pulled all plugs and refused to further help the authors. Critical parts are closed: libc.so.1, the core user-land library, has closed-source parts. Oracle now refuses to give precompiled binaries of newer versions of the closed parts to the SystemZ port community, effectively ending this port because the missing bits cannot be replicated or bypassed.
Also concerned is David Boyes, president and chief technologist of Sine Nomine Associates - the engineering firm that helped put OpenSolaris on IBM's System Z mainframe in 2008. OpenSolaris was to become part of the main Solaris product.

Boyes told The Reg that the Sun employee working on the port has gone - chopped as the result of Ellison's Sun employee cull - and hasn't been replaced. Boyes is certain Oracle is not going to replace that person.
Oracle was unable to comment for this article.

On paper, the future is not too bright for Solaris or OpenSolaris on IBM's mainframe platform. In the two years of the project's life, it's been downloaded just 1,000 times - sometimes repeatedly by the same organizations. Otherwise, we're told there are "plenty" of proofs of concept.

Boyes told us it's wrong to say Oracle has "killed" OpenSolaris on IBM's mainframe, but he noted the future is up for grabs as Oracle combs through the old Sun's software and project assets and decides what to do with them. The party line from Oracle, here and during the recent EclipseCon and the Open Source Business Conference, is that it's still working through projects and deciding what to do.

"This is all about politics and has nothing to do with technology," Boyes said, angry that so much of his own company's time - 20,000 to 30,000 hours - dedicated to the project could have been for nothing. "Guys who worked on the Power and Intel work outside of Sun are pretty damn pissed," he said.

He added that while source code for OpenSolaris is still available and can still be enhanced, unless Oracle commits to putting Sun's operating system on IBM's Z mainframe he'll have to put it on the back burner. "It will no longer have the priority if they make it clear this is going nowhere, and we will have to reconsider what we are doing," Boyes said.

Boyes is right. This is political. Solaris has a future inside Oracle, on Exadata servers running Oracle's database. Where OpenSolaris fits into that is unclear.

As for Solaris on the platform of a competitor that Ellison has taken enormous pleasure in pelting since the Sun acquisition, well - if Ellison does kill it, it won't be for theatrical reasons. It'll be because he's decided he can't make any money by having his own software run on IBM hardware.

If you want a sign of how much things have changed under the new management even at this early stage, consider this lesson from another corner of the OpenSolaris and Solaris camp.

InfoWorld has reported that Oracle has tweaked the Solaris download license, so that you can no longer download Solaris for free. You can now only use Solaris for free as part of a 90-day trial if you purchase a service contract. Under that nice Sun - but slightly stoopid Sun - all you had to do was jump through the hoops of some online survey and make sure you were smart enough to give a working email address for the download.

Yes, the flowers are wilting and anything that survives under Oracle will only bloom if it can deliver a return on Sun's investment. ®