Friday, October 13, 2006

Top 500 supercomputers in the world

http://www.top500.org/sublist?edit%5Blist_id%5D=27&edit%5Bcountry_id%5D=all&edit%5Bvendor_id%5D=all&edit%5Bregion_id%5D=all&edit%5Bcontinent_id%5D=all&edit%5Bsegment_id%5D=all&edit%5Bapplication_id%5D=all&edit%5Barchitecture_id%5D=all&edit%5Bconnfam_id%5D=all&edit%5Bconn_id%5D=all&edit%5Bprocfam_id%5D=all&edit%5Bsystemfamily_id%5D=all&edit%5Brankf%5D=1&edit%5Brankt%5D=500&op=Create+Sublist&edit%5Bform_id%5D=frmFields

27th Edition of TOP500 List of World’s Fastest Supercomputers Released

DOE/LLNL BlueGene/L and IBM gain Top Positions


MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. -- In what has become a closely watched event in the world of high-performance computing, the 27th edition of the TOP500 list of the world's fastest supercomputers was released today (June 28, 2006) at the International Supercomputing Conference (ISC2006) in Dresden, Germany.

The new TOP500 list, as well as the previous 26 lists, can be found on the Web at http://www.top500.org/.

The No. 1 position was again claimed by the BlueGene/L System, a joint development of IBM and DOE’s National Nuclear Security Administration (NNSA) and installed at DOE’s Lawrence Livermore National Laboratory in Livermore, Calif. BlueGene/L also occupied the No. 1 position on the last three TOP500 lists. It has reached a Linpack benchmark performance of 280.6 TFlop/s (“teraflops” or trillions of calculations per second) and still remains the only system ever to exceed the level of 100 TFlop/s. This system is expected to remain the No. 1 Supercomputer in the world for the next few editions of the TOP500 list.

Even as processor frequencies seem to stall, the performance improvement of full systems at the very high end of scientific computing shows no sign of slowing down. This time, the last 158 systems on the June 2005 list are too small to be included any longer, which represents a lower-than-average turnover rate after two record-breaking rates on the previous lists. However, the growth of average performance remains stable and ahead of Moore's Law.

Three of the TOP10 systems on the November 2005 TOP500 list were displaced by newly installed systems. The largest system in Europe is the new No. 5 at the Commissariat a l'Energie Atomique (CEA) in France. It is an Itanium-based NovaScale 5160 system built by the French company Bull, with 8704 processors and a Quadrics interconnect.

The largest system in Japan, a cluster integrated by NEC based on Sun Fire X64 with Opteron processors and an Infiniband interconnect, is installed at the Tokyo Institute of Technology and gained the No. 7 spot.

The German Forschungszentrum Juelich (FZJ) got to No. 8 with its new BlueGene system, which is now the second largest system in Europe. It is also the largest BlueGene system outside the US and the third largest in general.

The NEC-built Earth Simulator, which has a Linpack benchmark performance of 35.86 TFlop/s and had held the No. 1 position for five consecutive TOP500 lists before being replaced by BlueGene/L in November 2004, has now slipped to No. 10.

IBM remains the dominant vendor of supercomputers with almost half of the list (48.6 percent) carrying its label. Also, four of the TOP10 systems are from IBM. Hewlett-Packard (HP) remains unchallenged at the second position in this survey with 30.8 percent of all systems.

Intel microprocessors are at the heart of 301 of the 500 systems. Intel's EM64T-based processors are proving very successful in the high-performance computing (HPC) marketplace, with 118 systems already using them. AMD's Opteron processors are also steadily and rapidly gaining ground, now with 81 systems using them compared to only 25 systems one year ago.

The U.S. is clearly the leading consumer of HPC systems, with 298 of the 500 systems installed there. The European share continues to decline, now at 83 systems, down from 100 six months ago, while Asia mounted a turnaround, now at 93 systems, up from 66 six months ago.

Here are some highlights from the newest Top 500:

Only systems exceeding the 2.03 TFlop/s mark on the Linpack benchmark qualified to make the list this time, compared to 1.17 TFlop/s one year ago. The system at the bottom of the latest list was listed at position 183 just one year ago.

The entry level for the TOP10 exceeds 35 TFlop/s and the entry point for the top 100 moved from 3.41 TFlop/s one year ago to 4.71 TFlop/s.
Total combined performance of all 500 systems on the list is now 2.79 PFlop/s (“petaflops” or thousand “teraflops”), compared to 1.69 PFlop/s one year ago.
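The claim that growth remains "ahead of Moore's Law" can be checked against the figures quoted above. A minimal sketch, assuming the common 18-month-doubling reading of Moore's Law:

```python
# Year-over-year growth of the TOP500 figures quoted in the article,
# compared against Moore's Law (assumed here as a doubling every 18 months).
moore_annual = 2 ** (12 / 18)  # doubling per 18 months -> ~1.59x per year

entry_growth = 2.03 / 1.17     # entry-level Linpack threshold, TFlop/s
total_growth = 2.79 / 1.69     # total combined performance, PFlop/s

print(f"Moore's Law (18-mo doubling): {moore_annual:.2f}x/yr")
print(f"Entry threshold growth:       {entry_growth:.2f}x/yr")
print(f"Total performance growth:     {total_growth:.2f}x/yr")
```

Both observed growth rates come out above the Moore's-Law factor, consistent with the article's statement.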

Other trends of interest:

A total of 301 systems now use Intel processors, with 118 of these already using EM64T processors. The second most commonly used processors are the IBM Power processors (84 systems), just ahead of AMD Opteron processors (81).

There are 365 systems labeled as clusters, making this the most common architecture in the TOP500. Of these, 255 cluster systems are connected using Gigabit Ethernet and 87 systems use Myricom's Myrinet.

At present, IBM and Hewlett-Packard sell the bulk of systems at all performance levels of the TOP500. IBM remains the clear leader in the TOP500 list with 48.6 percent of systems and 54.3 percent of installed performance. HP is second with 30.8 percent of systems and 17.5 percent of performance. No other manufacturer is able to capture more than 5 percent in any category.

The U.S. is clearly the leading consumer of HPC systems, with 298 of the 500 systems installed there (up from 267 one year ago). The European share is slightly decreasing, to 83 systems, while the Asian share is increasing, to 93 systems, which puts it ahead of Europe again. The dominant countries in Asia are Japan with 29 systems and China, almost equal, with 28 systems.

In Europe, Germany (17 systems) lost further ground, with the UK clearly ahead again (35 systems). One year ago Germany was in the lead with 40 systems, compared to the UK's 32.

The TOP500 list is compiled by Hans Meuer of the University of Mannheim, Germany; and Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville.


IBM Easily Heads Top500 Supercomputer List

AMD's Opteron gains ground vs. last year

July 10, 2006 (Computerworld) -- The Top500 list of the world's fastest supercomputers, released late last month, showed IBM's Blue Gene continuing to reign and Advanced Micro Devices Inc.'s Opteron processor powering more systems on the list than last year.

IBM's Blue Gene/L System, used at the U.S. Department of Energy's Lawrence Livermore National Laboratory, recently reached a Linpack benchmark performance of 280.6 trillion floating-point operations per second (TFLOPS) to easily top the list. No other system has yet passed the 100 TFLOPS mark.

IBM supercomputers accounted for about half of the list, with Hewlett-Packard Co. occupying nearly a third.

The Top500 Supercomputer Sites list is compiled by supercomputing experts Jack Dongarra at the University of Tennessee, Knoxville, Erich Strohmaier and Horst Simon at the National Energy Research Scientific Computing Center at the Lawrence Berkeley National Laboratory, and Hans Meuer of the University of Mannheim.

Fewer Changes

The Top500 list, known for its rapid turnover, showed fewer changes than usual this year. Some 158 systems were bumped from the latest list, compared with more than 200 systems displaced in the June 2005 list.

But in the fast-paced world of high-performance supercomputing, no systems maker can rest on its laurels for long.

"The thresholds to get into the top 50 move fast; machines are there one day, gone the next," said Herb Schultz, manager for IBM's Blue Gene. "It's no secret Blue Gene has been in the market for a little while and we're looking at ways to make the chips faster, get more chips on a core and do a faster job of interconnecting nodes," Schultz said.

Among other trends on the list: Intel Corp. microprocessors powered 301 of the systems, down from 333 last year, while AMD's Opteron processors gained some ground by running 81 systems, compared with just 25 one year ago.

Solheim is a reporter for the IDG News Service.


IBM to run Power6 server chip at 5 GHz

Says it cranked up the speed without sacrificing power efficiency

October 10, 2006 (IDG News Service) -- IBM plans to crank up the speed on its Power6 server chip to 5 GHz, far higher than competing processors from Intel Corp. and Sun Microsystems Inc.

Despite its high frequency, the chip will avoid overheating through its small, 65-nanometer-process geometry, high-bandwidth buses running as fast as 75GB per second and voltage thresholds as low as 0.8 volts, IBM said.

When it ships the chip in mid-2007, IBM will target users running powerful servers with two to 64 processors, said Brad McCredie, IBM's chief engineer for Power6. He shared details on the chip at the Fall Microprocessor Forum in San Jose.

By doubling the frequency of its current Power5 design, IBM is swimming against the current of recent chip designs that sacrifice frequency for power efficiency. Instead, IBM cut its power draw by making the chip more efficient, with improvements like computing floating-point decimals in hardware instead of software, he said.

The company hopes the Power6 will help it reach new customers in commercial database and transaction processing, in addition to typical users of its Power5 chip in financial and high-performance computing, such as airplane design and automotive crash simulation, McCredie said. To win that business, IBM will have to compete with chips such as Intel's Itanium 2, code-named "Montecito," and Sun's high-end Sparc processors.

If this chip works as promised, IBM could be successful in that effort, analysts say. IBM is one of the few remaining alternatives to Intel in the market for the so-called big-iron servers used in high-end jobs such as scientific computing, image processing, weather prediction and defense, said Jim Turley, principal analyst at Silicon Insider in Pacific Grove, Calif.

IBM upgraded its current midrange Unix servers in February from 1.9-GHz to 2.2-GHz Power5+ processors, targeting users of large databases, enterprise resource planning and customer relationship management applications. The company will ship several versions of the Power6 chip, ranging from 4 to 5 GHz in frequency.



For more news from IDG visit IDG.net
Story copyright 2006 International Data Group. All rights reserved.

Fiber to the home: 'It's insanely better'

Cable too slow? Try blazing-fast optical fiber.

October 12, 2006 (Computerworld) -- Some call it fiber to the home (FTTH) or fiber to the premises (FTTP) or fiber to the node (FTTN) or fiber to the curb (FTTC) or fiber to wherever (FTTX).

But Jared Wray, an IT consultant in Seattle, has a simpler description.

"It's insanely better," he said. "I downloaded Microsoft SQL Server Service Pack 1, a 252MB file, in two minutes, 30 seconds. It's to the point where the speed of your throughput is gated by the speed of the remote server, rather than your local interface."
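Wray's anecdote can be turned into an effective throughput figure. A quick back-of-the-envelope check, taking MB as 10^6 bytes:

```python
# Effective throughput of the quoted download: a 252MB file
# in 2 minutes, 30 seconds.
size_bits = 252 * 1e6 * 8   # file size in bits (MB taken as 10^6 bytes)
seconds = 2 * 60 + 30       # elapsed time in seconds
mbit_per_sec = size_bits / seconds / 1e6
print(f"Effective throughput: {mbit_per_sec:.1f} Mbit/sec.")
```

That works out to roughly 13.4 Mbit/sec., well under the 30 Mbit/sec. plan described below, which is exactly Wray's point: the remote server, not the local link, was the bottleneck.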

For $40 per month more than he had been paying for a 6Mbit/sec. cable modem connection, he became an early user of an FTTH service from Verizon Communications Inc. called FiOS, with 30Mbit/sec. downstream and 5Mbit/sec. upstream. Instead of a pair of copper wires connecting his house to the telco central office, a fiber-optic strand was laid to his house.

Replacing the backbone

Basically, the phone companies have replaced their backbone networks with fiber and are, in a growing number of cases, extending that fiber to individual subscribers, affording them unprecedented data speeds and opening exciting possibilities for telecommuters. Indeed, pundits look forward to a day when copper phone wires will be found only in museums, and the availability of enormous amounts of bandwidth will give birth to applications and services currently undreamed of.

As of September, there were 1.01 million FTTX users in North America, said Mike Render, a consultant at RVA Market Research in Tulsa, Okla. That's a major leap from the 332,700 subscribers counted in 2005, or the 146,500 counted in 2004. Almost as interesting to telecom analysts is the number of homes passed (i.e., the necessary fiber has been laid in front of the house). That number has also ballooned, reaching 6,099,000 in September, up from 2.7 million in 2005 and 970,000 in 2004.

Wray's carrier, Verizon, is the single largest FTTX carrier in North America. Verizon spokesman Mark Marchand in Basking Ridge, N.J., said the telco plans to pass 6 million homes with fiber by the end of this year and will continue to pass 3 million additional homes yearly through at least 2011. By then about half of Verizon's 30 million home market base will have been passed.

"We want to build a network that would not just cover us for the next four or five years, but be future-proof," he said. He noted that reaching higher speeds only requires putting new electronics on either end of the fiber.

Verizon FiOS subscribers are offered downstream speeds of 5, 15, 30 and (in selected markets) 50Mbit/sec., using technology called broadband passive optical network (BPON) with a 322Mbit/sec. channel that is optically split 32 ways. Marchand estimated that the system could support 100Mbit/sec. subscriber connections, but that speed is not being marketed yet.
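The split ratio implies an oversubscription trade-off worth making explicit. A sketch of the arithmetic, using the channel rate and split ratio quoted above:

```python
# Fair-share arithmetic for the passive optical split described above.
channel_mbps = 322          # shared downstream channel, Mbit/sec.
split_ways = 32             # optical split ratio
fair_share = channel_mbps / split_ways
print(f"Worst-case fair share: {fair_share:.1f} Mbit/sec. per subscriber")
# Service tiers above this level (30 or 50 Mbit/sec.) rely on statistical
# multiplexing: not all 32 subscribers draw their full rate at once.
```

The worst-case share is only about 10 Mbit/sec., so the higher tiers depend on subscribers' bursty usage patterns rather than dedicated capacity.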

In 80 cities where video franchises have been acquired, Verizon also offers a digital TV network, using an 860Mbit/sec. channel on a different frequency, so that the video bandwidth doesn't reduce the available data bandwidth.

As of Aug. 1, 375,000 homes had subscribed to FiOS, said Marchand, who noted that an average of 12% of local broadband users switch to FiOS during its first nine months of availability in a specific market. No sales figures for the related digital TV service were available.

Next year Verizon will switch to Gigabit passive optical network (GPON) for new construction, with a backbone channel of 2.4Gbit/sec. downstream and 1.2Gbit/sec. upstream, Marchand added.

Fiber to the node

But outside Verizon, fiber often means FTTN (fiber to the node), where fiber is run to a neighborhood interface box, and the subscribers are served from there using repurposed copper wires, usually through high-speed, short-range versions of Digital Subscriber Line (DSL). FTTN variants are called active networks, since they rely on active electronics rather than passive fiber.

The most visible FTTN example appears to be AT&T Inc. (formerly SBC Communications Inc.) with its U-verse offering (although, like Verizon, AT&T is laying pure fiber for new housing developments). Using Very High Speed Digital Subscriber Line (VDSL), customers can receive at least 25Mbit/sec., explained AT&T spokesman Wes Warnock in San Antonio. The bandwidth includes a data channel running at 6Mbit/sec. downstream; the rest is devoted to digital TV and should be enough for four different simultaneous standard TV channels, he said.

Although it's slated to be available in at least 15 cities by year's end, U-verse is currently available only in San Antonio, where subscriber Alan Weinkrantz, a high-tech publicist, said he was less impressed with the higher data speed than with the user interface for the TV system.

"Channels change faster and the interface and user experience is superior -- I enjoy doing my own programming," he said. "I could say that the data is faster by a certain factor, but I don't notice it when I do e-mail."

Among other major carriers, BellSouth Corp. has long practiced FTTC (fiber to the curb), and it's planning high-speed DSL to compete with fiber, with a speed of about 24Mbit/sec. And Qwest Communications International Inc. has been installing FTTH in new developments.

Local level

Meanwhile, there is a lot of fiber activity at the local level, with communities not waiting for their telco to lay fiber, or partnering to make sure it happens, especially in upscale housing developments.

An April survey published by the FTTH Council listed 936 U.S. communities that were being served with fiber networks. Of those, 377 were installed by regional Bell operating companies, 270 by other incumbent local exchange carriers, 165 by competitive local exchange carriers (CLEC), 84 by partnerships between developers and CLECs, 30 by municipalities and 10 by public utility districts. Most used passive optical networking.

There's also a lot of activity overseas, noted Jeff Heynen, an analyst at Infonetics Research in Campbell, Calif. In fact, both North America and Europe lag so far behind the Pacific Rim in terms of fiber adoption that they may never catch up, he said. Reliance on high-rise apartments in Asia means that the fiber only has to reach the building to serve multiple subscribers, while in North America and Europe it often has to be laid to each house, he explained.

As a result, there are about 6 million fiber users in Asia, as opposed to 680,000 in Europe and (as stated) about a million in North America, Heynen noted. He predicted the world total will grow to 38 million by 2009.

As for a complete switchover from copper to fiber, "That will take a long time, since there is a big installed base of DSL," Heynen said. And DSL may keep copper competitive with fiber longer than anyone had imagined -- there are already versions that can reach 100Mbit/sec., he noted.

Unbeatable speed, theoretically

But eventually, fiber is fated to win any performance wars, since (according to research at Bell Labs done in 2001) the theoretical capacity of a strand of fiber is thought to be 100 trillion bits per second, or a million times faster than the maximum of 100Mbit/sec. being contemplated for today's FTTH services.
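The "million times" figure follows directly from the two numbers quoted:

```python
# Sanity check on the "million times faster" comparison above.
fiber_theoretical = 100e12  # 100 trillion bits/sec., per the Bell Labs estimate
ftth_max = 100e6            # 100Mbit/sec. ceiling contemplated for FTTH today
print(f"Ratio: {fiber_theoretical / ftth_max:,.0f}x")  # 1,000,000x
```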

But fiber will have had a big impact long before that happens. Heynen foresees social networks like MySpace being based on video exchanges, rather than today's text and pictures. When software can be downloaded at speeds approaching local disk access, the use of embedded operating systems and packaged applications may fade away as users switch to thin clients, he predicted.

"By 2010 people will begin creating applications that we have not even thought about today, perhaps involving tele-medicine or tele-health," predicted Elroy Jopling, an analyst at Gartner Inc. "It will be like the old days of the PC -- when the RAM doubled, people found a use for it."

Other benefits can be found in the present, indicated Joe Savage, president of the FTTH Council in Portland, Ore. Surveys of FTTH users show they spend an average of one additional day per month telecommuting, and they feel that the connection increases the market value of their home by 1%, he said.

Also, he lists decisions by various corporations to locate in communities with fiber because they knew the employees could work from home.

Back in Seattle, Wray can hardly contain his enthusiasm for FTTH. "I love it -- it's the greatest thing I've ever had," he concluded.


Lamont Wood is a freelance writer in San Antonio.

Symantec to add encryption, better recovery to backup

Users are beta-testing the product now for a November/December launch

October 12, 2006 (Computerworld) -- Symantec Corp. is expected to announce before the end of the year a new version of its Backup Exec backup and recovery software that includes encryption and more granular recovery capability for Microsoft applications, according to beta testers and other users familiar with the product.

Backup Exec 11 has been undergoing beta testing since this spring, with some users reporting having tested it in April and May, and others still beta-testing it now. The existing version, Backup Exec 10d, was announced in January 2005, and Symantec typically operates on a two-year upgrade window. Symantec officials would not comment on the product.

One user, who asked not to be named, said he had tested the product this spring and said it included granular Active Directory restore, which could allow a storage administrator to restore a single deleted user without having to restore all of Active Directory. Similar granular restore capabilities are going to be available for Microsoft servers such as Exchange and SQL Server, he said. Other new features the user reported included distributed catalogs, which enable users -- perhaps in a remote office -- to perform data restores if the centralized administration server isn't available. In addition, the product supports Symantec's LiveUpdate feature.

Gary Cannon, president of Advanced Internet Security Inc., said the new version, which he expects to come out in November or December, includes continuous data protection for Exchange that would eliminate the daily backup window. The Colorado Springs-based systems integrator is beta-testing the software. In addition, Cannon said, users will be able to restore Exchange messages and folders and individual SharePoint Server documents, as well as perform SQL snapshots.

Jeremy Burton, Symantec's group president of enterprise security and data management, had said at Symantec's Vision conference in May that the company was working on a sequel to the current 10d version of Backup Exec, called Eagle, that incorporated continuous data protection and live state recovery capabilities into a single product that would ship in early 2007 (see "Symantec announces continuous data protection products"). This would provide continuous snapshots that would let users recover data from any point in time to any point in time, he said.

Cannon said the product included the ability to encrypt backups. Such functionality is available in the more expensive NetBackup but not currently in Backup Exec. Bob Stump, lead NetBackup administrator for the state of Michigan, said he had also heard that the NetBackup encryption functionality would be available in Backup Exec and noted that he knew of many users who had migrated from Backup Exec to NetBackup to get that functionality.

Users on Symantec's Veritas Forums Web site who were familiar with the product also confirmed that it will have encryption, as well as eliminate a fragmentation problem that was present in Version 10d.