Wednesday, February 21, 2007

SNW: New storage standard to include metadata search

The eXtensible Access Method is focused on searching fixed content
Lucas Mearian

April 03, 2006 (Computerworld) -- SAN DIEGO -- The Storage Networking Industry Association (SNIA) announced today that it is well on its way to developing an interface standard that would allow companies to perform internal searches for any data using Google-like tools, based on metadata associated with a file, image, audio file, database or even e-mail.

The proposed standard, called Extensible Access Method, or XAM, is focused on searching fixed content and is expected to allow users to find information across multivendor disk and tape systems to retrieve data requested by regulators or for legal discovery purposes.

“If you’ve got 19 days to provide information to someone, you can use these common API sets to access the data,” Matt Brisse, technology strategist and vice chairman of the board for SNIA, said at Storage Networking World here today.

The standard could also allow a hospital to retrieve a patient’s old X-rays, as well as any electronic documents associated with it, such as doctors’ notes.
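XAM's bindings were still being defined when this was written, so the following Python fragment is only a rough sketch of the concept, not the standard's API, and the record fields are invented: fixed content is located by the metadata attached to it rather than by file path.

    # Hypothetical sketch of metadata-driven retrieval -- not the XAM API,
    # whose bindings SNIA had not yet published. Field names are invented.
    store = [
        {"id": "xray-0042", "type": "image/x-ray", "patient": "J. Doe",
         "created": "1998-06-01"},
        {"id": "note-0042", "type": "text/doctor-note", "patient": "J. Doe",
         "created": "1998-06-01"},
    ]

    def search(records, **criteria):
        """Return every record whose metadata matches all criteria."""
        return [r for r in records
                if all(r.get(k) == v for k, v in criteria.items())]

    for hit in search(store, patient="J. Doe"):
        print(hit["id"], hit["type"])   # the X-ray and the doctor's notes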

Suzie Dahle, CIO of DXP Enterprises, Inc., said being able to search data and restore it piecemeal, rather than having to restore an entire database, would greatly reduce the labor involved in data restores.

Brisse said 36 of SNIA’s member companies are working on the XAM interface, “so this is a full-court press.” SNIA’s Fixed Content Aware Storage Technical Working Group expects to demonstrate the standard in early 2007.

Ray Dunn, a member of SNIA’s board of directors, said the group is working on three separate updated versions of the Storage Management Initiative Specification, or SMI-S, which defines the way multivendor systems communicate with each other. SNIA is currently working to get versions of SMI-S ratified as an international standard by the International Organization for Standardization (ISO).

Dunn said Version 1.02 is being reviewed by SNIA members, and Version 1.03 has just been ratified by the American National Standards Institute (ANSI) and is being pushed to ISO for ratification. Version 1.1 is on track to be submitted to the InterNational Committee for Information Technology Standards for ratification as an ANSI standard.

Dunn said SNIA is particularly focused on Version 1.1 of SMI-S, which defines interfaces between network-attached storage and iSCSI-based devices. SMI-S v1.1 deals with device descriptions and the services associated with them, such as copying data from one array to another.

“It will have the capability of copying data from one host to another, regardless of the vendor,” Dunn said.

DXP Enterprises, which distributes maintenance, repair, and operating equipment and products to industries such as oil and gas, recently installed a disaster recovery architecture that includes NAS arrays that replicate data between two sites 200 miles apart.

Dahle said she's happy to hear the SNIA is developing replication standards "because while you can get the data over there, it doesn’t necessarily mean it’s usable or it’s right. You have to be able to work on both sides of that."

Microsoft, Dell team up on NAS/iSCSI array

The array is aimed at competing with NetApp's FAS250 array
Lucas Mearian

December 06, 2006 (Computerworld) -- Dell Inc. and Microsoft Corp. today announced a storage array that can serve up either file or block-based data and has software that supports features such as data snapshots and replication.

Designed for small and midsize businesses, the Dell PowerVault NX1950 comes loaded with Microsoft Windows Unified Data Storage Server 2003, offering file server and IP storage-area network (SAN) support based on the iSCSI protocol. The PowerVault NX1950 comes in single or two-node cluster configurations and includes a redundant back-end storage array -- the new Dell MD1000 -- scaling up to 45 serial-attached SCSI (SAS) drives and 13.5TB capacity. The MD1000 supports up to four host servers.

Microsoft obtained the iSCSI driver for the array from its acquisition of String Bean Software in June (see "Microsoft bites into String Bean for iSCSI technology").

The array also supports CIFS (Windows) and NFS (Linux, Unix, and Macintosh) protocols, and it has security and mapping functions familiar to Windows and Unix administrators. The NX1950 also comes with Windows Storage Server 2003 R2 capabilities, such as single-instance storage, full indexed text search, distributed file services, and management of user quotas, file screening and storage reports.

Eric Endebrock, senior manager of Dell Enterprise Storage, said the array's SAS drives offer a combined 92,000 I/O operations per second across four SAS ports. Endebrock said the NX1950 is designed to compete against Network Appliance Inc.'s FAS250 and FAS270 arrays.

"We're getting great performance for a cluster that can be very highly available," Endebrock said. "It's very affordable for customers to get into using the serial-attached SCSI drives.

Bala Kasiviswanathan, group product manager for Windows Storage at Microsoft, said administrators can manage their volumes, create shares for NFS or CIFS and manage their iSCSI target all from a single interface.

Another first for Microsoft is the array's automated clustering capability. "So people can set up multinode clusters of these devices and really have a highly available solution," Kasiviswanathan said, adding that the clusters scale up to four nodes.

"Last but not least, we've added remote management capability for this box from a non-Windows client," he said.

Early in 2007, Dell and Microsoft plan an upgrade to the NX1950 that will allow greater clustering and drive expansion, as well as add Fibre Channel SAN gateway capabilities.

The NX1950 starts at about $17,000. A 4.5TB configured array starts at about $24,000.

Iomega terabyte-size NAS server sports swappable drives

Peter Cohen

January 23, 2007 (MacCentral) -- Iomega Corp. today announced the availability of its StorCenter Pro NAS 150d, a network-attached storage server that features 1TB of storage capacity and hot-swappable hard disk drive mechanisms. It is priced at $799.

The 1TB model features four hot-swappable SATA II 250GB hard disk drive mechanisms. A 2TB version that uses 500GB mechanisms is available for $1,499. You can configure the server using RAID Level 0 (striping) for improved performance, RAID Level 1 (mirroring) for redundancy or RAID Level 5 (striping with parity) for a balance of capacity and fault tolerance.
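The parity arithmetic behind RAID 5 is simple enough to show directly. The Python sketch below is a deliberately simplified illustration (real arrays rotate parity across drives and work on fixed-size stripes): the parity block is the XOR of the data blocks, so any single failed drive can be rebuilt from the survivors.

    # Simplified illustration of RAID 5 parity; real arrays rotate parity
    # across member drives and operate on fixed-size stripes.
    from functools import reduce

    def parity(blocks):
        """XOR the blocks together, byte by byte, to form a parity block."""
        return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
    p = parity([d1, d2, d3])                 # parity block on the fourth

    # If the second drive fails, its data is the XOR of everything left.
    assert parity([d1, d3, p]) == d2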

The StorCenter Pro NAS 150d has a Gigabit Ethernet port and a USB 2.0 port for sharing up to four printers on the network. It supports Mac OS X, Windows and Linux operating systems. A five-user license of EMC Retrospect Express is also included, for network-based backups.

Mac system requirements call for Mac OS X v10.2.7 or later.

HP integrates Cisco switch, adds encryption storage arrays

HP added encryption to its Data Protector Software
Deni Connor

February 07, 2007 (Network World) -- Hewlett-Packard Co. this week launched an enhanced midrange storage array, a new Fibre Channel switch for its BladeSystem servers, a network-attached storage (NAS) gateway and encryption for its data protection software.

At the HP Asia Pacific StorageWorks Conference in Ho Chi Minh City, Vietnam, the company added the ability to handle both application (block) data and file data to its Enterprise Virtual Array family of arrays – the EVA4000, EVA6000 and EVA8000 – in Windows or Linux environments. Called EVA File Services, the software lets customers virtualize their storage into a single pool that can be managed centrally. HP’s EVA File Services is a combination of its EVA arrays and file-server clustering software it resells from PolyServe. The product, available in a two-node starter kit, is $90,000. It is expected to ship next month.

The Fibre Channel switch – the Cisco MDS 9124e Fabric Switch for HP c-Class BladeSystem -- is a 4Gbit/sec switch that fits in the BladeSystem enclosure and shares power and cooling with the server blades. The switch is available in 12- or 24-port configurations. Expected to be available next month, it costs $6,000 and $9,500, respectively.

The HP ProLiant DL585 G2 Storage Server is a NAS gateway that provides multiple protocols including iSCSI connectivity and features Serial Advanced Technology Attachment and Serial Attached SCSI drives. It is intended to be installed as a front-end device to a storage-area network (SAN) and provide file and print serving. The DL585 uses the Microsoft Windows Unified Data Storage Server 2003 operating system, which allows for combined SAN and NAS connectivity.

The G2 Storage Server is the second array to use Microsoft’s storage technology. Dell introduced the Dell PowerVault NX1950 in December 2006. The G2 Storage Server starts at $19,000.

Further, the company added 256-bit Advanced Encryption Standard encryption to its Data Protector Software to secure data in flight and at rest. HP Data Protector Software automates the backup and recovery of data from disk or tape over distance for disaster recovery purposes. Data Protector Software starts at $490 per system backed up.
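HP doesn't describe Data Protector's internals here, so as a point of reference the sketch below shows 256-bit AES in Python via the third-party cryptography package. The choice of GCM mode and the sample payload are assumptions for illustration, not details of HP's product.

    # Minimal sketch of 256-bit AES (GCM mode) using the "cryptography"
    # package -- an illustration of the standard, not of Data Protector.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
    nonce = os.urandom(12)                      # must be unique per message
    aesgcm = AESGCM(key)

    backup_block = b"payroll backup, tape 7, block 1024"
    ciphertext = aesgcm.encrypt(nonce, backup_block, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == backup_block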

EqualLogic adds to SAS storage array product line

Technology automatically load-balances between storage arrays
Robert Mullins

February 20, 2007 (IDG News Service) -- Storage appliance maker EqualLogic Inc. says its newest product improves performance by doing a better job of balancing the data load among storage arrays in a network.

EqualLogic Tuesday introduced the PS3900XV, the third model in the PS3000 line that debuted last September. The product is a Serial Attached SCSI (SAS) array and is designed to store data from high-demand applications such as Oracle or Microsoft SQL databases, or the Microsoft Exchange messaging and collaboration platform.

Like the previous PS3800XV, the 3900 uses drives that spin at up to 15,000 rpm, but it features larger storage capacity -- 4.8TB, versus 2.3TB for the 3800.

EqualLogic's SAS array can support more end users yet requires less storage capacity than competing arrays based on the Fibre Channel protocol, which is faster and supports higher capacity than SCSI, said John Joseph, vice president of marketing for EqualLogic.

"There is a lot of concern in the industry about 1GB [SCSI] Ethernet and how it competes against 4GB Fibre Channel. Well, the wire is not the bottleneck," said Joseph.

EqualLogic said it moves data faster across its storage arrays by automatically balancing the load among storage devices.

"We are able to load-balance the workload ... without the user ever having to click on a window to reconfigure or rebalance or retune, or do anything manually," Joseph said.

Gartner Inc. forecasts that SAS unit shipments will surpass those of Fibre Channel in 2008 and continue to grow faster. Even though Fibre Channel offers better performance than SAS, SAS is cheaper and more widely understood, said Gartner analyst Stanley Zaffos.

SAS unit growth will be strongest in the small-to-midsize business market as those companies' storage needs grow, said Zaffos.

According to 2006 research from IDC, EqualLogic holds a 19% market share for SAS disks, based on units shipped, second only to Network Appliance Inc. It also competes with storage start-ups such as LeftHand Networks Inc. and Nexsan Technologies Inc.

The PS3900XV, which will begin shipping in March, carries a list price of $67,000, although Joseph said it will be more expensive in international markets.

Dell, EMC ship SMB storage with more kick for the buck

February 20, 2007 (Network World) -- Dell Inc. today unveiled storage systems for small and midsize businesses that let customers deploy storage-area networking and tape automation technology more affordably.

The Dell/EMC CX3-10 is an entry-level storage system that can attach either to a SAN via Fibre Channel or to a Gigabit Ethernet network via iSCSI. It accommodates either high-capacity Serial ATA (SATA) or high-performance Fibre Channel drives, which can be intermixed in the same array for a total capacity of 30TB. The 1U (1.75-inch) high storage processor contains four Fibre Channel and four iSCSI connections.

The Dell/EMC CX3-10 ships with PowerPath failover and management software factory-installed. It is managed locally or remotely via the Navisphere Task Bar software. The Dell/EMC CX3-10 supports as many as eight directly connected or six SAN-connected Windows, Linux, Solaris, HP-UX or AIX host computers.

Using 500GB Serial ATA or 73GB, 146GB or 300GB Fibre Channel drives, the CX3-10 starts at $22,000.

The PowerVault TL2000 and the PowerVault TL4000 tape libraries support LTO-3 (Linear Tape-Open) media. Priced the same as tape autoloaders, the TL2000 and TL4000 let SMB customers consolidate their data protection and archiving environments.

The PowerVault TL2000 and TL4000 hold 44 tape cartridges for a total capacity ranging from 9.6TB to 17.6TB. They are available in 2U (3.5-inch) high or 4U (7-inch) high configurations. Each tape library is rack-mountable and customer installable. The TL series starts at $9,300.

Sunday, February 18, 2007

something of importance

http://www.iese.uiuc.edu/pdlab/Papers/Customization.pdf

this is one of the best sites for design and IT...

check it out

Wednesday, February 07, 2007

Enter the 'Penryn' processors

Intel's 45nm process will manifest itself in a microprocessor architecture known only by the code name "Penryn." Not surprisingly, Intel has kept a fairly tight lid on Penryn, but based on rumors and speculation by analysts and experts, it appears that these processors will be based on the Core 2 architecture but will take advantage of the 45nm process to provide larger L2 caches and increased performance. (It's worth noting that Penryn will also serve as Intel's mobile processor architecture, with laptop CPUs scheduled for release in early 2008.)

In terms of specific processor releases, Computerworld has heard of a few Penryn-based CPUs that should be released in late 2007. Two dual-core, single-die processors known as "Ridgefield" and "Wolfdale" could be released as early as the third quarter of 2007. There has been no concrete information regarding the clock speeds of these two processors, but reliable early information has indicated that the Ridgefield processor will have 3MB of shared L2 cache, while the Wolfdale variant will have 6MB of shared L2 cache.

One of Intel's most potentially exciting desktop CPUs is code-named "Yorkfield" and appears to be a 45nm-process quad-core processor that uses a single die (referred to as "native" quad-core) and has an astonishing 12MB of shared L2 cache. When combined with the performance-per-watt advantages of the 45nm process, this could be Intel's extreme high-end CPU of the year if it is released on schedule at the end of Q3 or the beginning of Q4 2007.

Finally, although the company has not confirmed this in any way, it's entirely plausible that Intel could combine two Yorkfield processors at the end of 2007 to create an octo-core, dual-die, 24MB L2 cache monster.

All the Penryn processors described above will be compatible with Intel's new Bearlake chip set.

As a teaser to what may come beyond 2007, rumors have swirled around a future-gen Intel microprocessor architecture code-named "Nehalem" that will be released in 2008. No details on this architecture have been revealed to date.

Smaller, faster, cooler, more efficient: The 2007 mobile CPU road map

AMD is pitting an innovative new CPU design against Intel's new Centrino platform and 45nm fabrication process. The mobile chip wars are hotter than ever.

February 06, 2007 (Computerworld) -- Ask anyone to name their No. 1 laptop grievance and you'll repeatedly hear two words: "battery power." In an era when seemingly everyone is switching from desktops to laptops, the inability to compute for more than four or five straight hours without being plugged in feels outdated.

Thankfully, it appears that market leaders Intel Corp. and Advanced Micro Devices Inc. (AMD) understand this shortcoming and are moving to address it. Both last year and this year, the all-important notion of performance-per-watt has dominated the spotlight. Greater performance-per-watt results in cooler inside temperatures and increased power efficiencies. For laptops, both of these elements are critical, and addressing them usually translates into longer battery life.

Typically, laptop processor speeds have increased at the cost of battery life. However, with last year's release of its mobile Core 2 Duo processors, Intel made great strides in increasing performance while decreasing power consumption. But can the chip giant keep it up?

Considering how important the mobile computing category is to overall profits, it's clear that AMD will have to deliver some substantially better products to make a dent in Intel's dominance. Can AMD deliver?

Keep reading for details -- including a surprising and novel approach to CPU design by AMD.

Editor's Note: Looking for information on desktop, rather than mobile, CPUs? See "Beyond Dual Core: 2007 Desktop CPU Road Map."

Beyond Dual Core: 2007 Desktop CPU Road Map


January 01, 2007 (Computerworld) -- What a difference a year makes. One year ago, we were dazed, dazzled and beguiled by the arrival of dual-core processors. Offerings from Intel Corp. and Advanced Micro Devices Inc. had analysts, journalists, IT professionals and enthusiasts all gushing with praise for a bright new multitasking future.

Amazingly, both Intel and AMD were able to deliver on the potential of dual-core processing. Throughout 2006, desktop PCs played host to a series of processors that, while slower at the clock-speed level, were faster in real-life usage, allowing for unprecedented amounts of multitasking. (For more about both companies' current lineups of desktop CPUs, see our CPU Buyer's Guide.)


As the calendar flips to 2007, we are firmly entrenched in the world of multicore processors. And based upon the confidential road maps of both Intel and AMD, it is clear that dual-core CPUs are only the launching point for the future of the microprocessor. In 2007, quad cores and even eight-core CPUs will be available. By 2009, there's a good chance that sixteen-core processors will be on the market.

As we enter 2007, five key questions regarding the pending year's CPU battle are on our minds:

  1. Will AMD be able to continue its dominance in the U.S. desktop market?
  2. How will Intel capitalize upon the success of Core 2?
  3. Will AMD be able to match the success of Intel's Core 2 processors?
  4. When will the market see true quad-core and even eight-core processors?
  5. What surprises do the chip makers have up their sleeves?

With all this in mind, we're taking an extended look at the processors and processor trends you can expect to see in 2007. Not surprisingly, neither AMD nor Intel was willing to divulge many specifics regarding their CPU releases for the coming year. So we scoured the Net, pored over statements from both companies and dug into reports from the host of analysts and experts who cover them.

It's worth noting that much of the information in this road map is preliminary and code-name-level information. As such, the specifics of the processors could change in coming months.

All secrets are revealed within.






FiberCo™

National Research and Education Fiber Company

Internet2 has established the National Research and Education Fiber Company (FiberCo) to support regional fiber optical networking initiatives dedicated to research and higher education.

FiberCo supports the Internet2 community's goals of developing and deploying advanced network applications and technologies, and complements existing Internet2 network infrastructure by providing a means for acquiring, holding, and distributing fiber optic network assets. FiberCo helps Internet2 meet a critical objective by facilitating the ongoing development of regional optical networking initiatives around the country.

The assets allocated by FiberCo are expected to enable testing of a wide variety of highly advanced network applications, including uncompressed high-definition television quality video, remote control of scientific instruments such as mountaintop telescopes and electron microscopes, collaboration using immersive virtual reality, and grid computing.

FiberCo's extensible allocation includes over 10,000 route miles of dark fiber acquired from Level 3 Communications, Inc. Questions concerning FiberCo may be directed via e-mail to fiberco@internet2.edu.

Internet2 Network: Architecture

The architecture of the new Internet2 Network consists of: the national footprint; the connection to the regional optical networks (RONs) and the equipment and interfaces to support those connections; and the ability to support services to the research and education community, including campus researchers and community wide participants.

The hierarchy of the Internet2 Network architecture is similar to previous networks: backbone to regional to campus. However, the network itself is completely different. It is no longer an IP network like those found in the commercial sectors of the Internet. Rather, it is a hybrid network that supports layer 1 dynamic and static services along with innovative new techniques at layer 3. Its most important feature is the ability to experiment with new protocols and ideas for the research and education community.

In defining the architecture, several crucial design goals were considered, including but not limited to:

  • Serve all Internet2 members – campuses, affiliates, sponsored participants, SEGPs, and corporate participants – and enhance the ability to serve a wider community. All current RONs are to be supported.
  • A hybrid network capable of point-to-point services together with an IP network.
  • Every connector (RON) attaches to a backbone ring at a metro location not requiring extensive backhaul.
  • The community retains complete control of the layer 1 optical system, including the provisioning and switching of wavelengths and subchannels.
  • The community does not have to concentrate on reliability and sparing. The carrier is responsible for an SLA. The carrier provides the operational support, allowing the community to focus on networking. Control of the system is left to the community, allowing the development of new and dynamic services.
  • The system is capable of supporting network research in a wide variety of ways, including the ability to set up networks at will for research communities.
  • The system is provisioned on fiber that is used by the community and maintained by the carrier. There is a significant financial advantage to the community if it provides the fiber under an IRU with the carrier.

Running the system on dedicated fiber has substantial advantages. Dedicated wave equipment currently used by Level 3 will provide the waves for the system. That wave system is provided by Infinera, which supports advanced technologies with substantial add/drop capabilities and significant advantages in provisioning and redundancy. Provisioning a static wave is accomplished by simply installing the endpoint interfaces. This is far different from most current optical systems, where many interfaces must be installed along the entire path. Moreover, since the system is completely dedicated to Internet2, it is possible to leap beyond the carrier’s standard offerings and utilize the advanced technology provided by Infinera.

The system will also incorporate a grooming capability – the ability to provide subchannels through waves using either an Ethernet or advanced SONET infrastructure.

The Ciena CoreDirector multiservice switching system will provide point-to-point services down to the campuses through the RONs. The goal is to provide lightpath capabilities provisioned within seconds that last for hours or longer.

The topology consists of optical nodes connected by waves maintained by Level 3 that connect to RONs and other participants. The carrier footprint primarily determines the topology of the network, but the use of the Infinera gear allows for great variability in drop/add locations.

The new Internet2 Network design encourages aggregation at the RON layer in the hierarchy.

The IP network, corresponding to the current Abilene backbone, is built on top of the optical network. The IP backbone is provisioned across the waves in the system, and each RON connects to the IP network through the optical system.

Since the carrier provides an SLA for the waves on the system, the IP network will have carrier-quality provisioning, which is expected to be minimally three 9’s (99.9%) uptime, but is likely to be closer to five 9’s (99.999%).
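The gap between those two figures is larger than it sounds; a quick calculation makes it concrete.

    # What three 9's versus five 9's means over a year of operation.
    HOURS_PER_YEAR = 24 * 365

    for label, availability in [("three 9's", 0.999), ("five 9's", 0.99999)]:
        downtime = HOURS_PER_YEAR * (1 - availability)
        print(f"{label}: {downtime:.2f} hours of downtime per year")

    # three 9's: 8.76 hours per year; five 9's: roughly 5 minutes per year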

There are substantial redundancy options. The Infinera platform can provide control plane redundancy for IP connectors, and SONET restoration is also a possibility. These options will be determined by consulting the community as a whole.

The IP network will initially use Internet2's existing Juniper routers, which remain state of the art and are capable of migrating to 40 Gbps services.

The architecture supports a variety of circuit (or lightpath) services on the new Internet2 Network, and the agreement adds wave-provisioning capabilities across the entirety of the carrier’s footprint. Services provisioned on the new network:

  • Short Term and Long Term dynamically provisioned circuits
  • Long term static, full wave services
  • "Off net" services provided by the carrier through “WaveCo,” provisioning circuits – optical carrier and digital signal – for regional networks and other Internet2 participants needing to extend connectivity anywhere the Level 3 optical network reaches. The “off-net” service will be provided at cost-effective, aggregated rates.

Engineering support for the new network will be drawn from members of the community and the Internet2 staff. Building on our experience in designing, developing, and maintaining Abilene, HOPI, and NLR, Internet2 expects operational support to fall into three broad categories: control plane activities and dynamic provision of basic services; application and advanced services support in hybrid networks; and engineering, monitoring and management.

Internet2 out now... detailed analysis

source: http://www.internet2.edu/network/

CHARACTERISTICS:

The new Internet2 Network will be deployed nationally over a 13,000-mile dedicated fiber-optic backbone, providing dynamic and static wavelength services along with existing enhanced IP capabilities.

  • Economies of scale, leveraging Level 3's next-generation optical infrastructure and capital investments
  • Capacity upgrades and much greater bandwidth flexibility through Infinera's leading-edge optical platform and Ciena's multiservice optical switch capabilities
  • Short-term and long-term dedicated circuits for individual and institutional capacity needs
  • Dynamic circuit provisioning within seconds, with advance reservation of circuits planned to support an even broader range of applications and research
  • Static wave provisioning from sub-wavelength services in 50 megabit per second (Mbps) increments up to multiple 10 gigabit per second (Gbps) wavelengths, letting connectors allocate bandwidth more efficiently (see the quick calculation after this list)
  • Dense wavelength division multiplexing capabilities to build separate logical networks over the same fiber facility
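As a quick sanity check on those increments:

    # 50 Mbps subchannels available within a single 10 Gbps wavelength.
    wave_mbps = 10 * 1000
    increment_mbps = 50
    print(wave_mbps // increment_mbps)   # 200 subchannels per wave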

Initial Deployment: 100 Gbps of capacity along the entire network footprint

Future Capacity: Potential migration to 40 Gbps and 100 Gbps interfaces

Wavelength Scalability: Unlimited availability of additional wavelengths without the requirement for large capital expenditures

Reliability: Carrier-class standard agreement for the underlying network infrastructure

Flexibility: Support for sub-wavelength (50 Mbps increments) dynamic provisioning across every wave on the network backbone for better bandwidth allocation

Community Control: Provisioning and switching of wavelengths and subchannels

Internet2's agreement with Level 3 allows the community to have complete control of the entire network infrastructure without having to take on the burdens and costs of maintaining the underlying fiber and related facilities. In essence, Internet2 members have the advantages of ownership without the costs and can implement faster services or new technologies when needed, independent of the carrier. At the same time, Level 3 maintains responsibility for monitoring the optical infrastructure to provide carrier-class reliability that supports the research and education community’s critical work.

Internet2 Engineering and Community Support:

  • Control Plane aspects of dynamic provisioning, supported by Mid-Atlantic Crossroads through the NSF DRAGON project and by the Internet2 HOPI project
  • Application and Advanced Services Support, targeting key applications for the research community, such as eVLBI, as well as telemedicine
  • Engineering, Monitoring, and Management, supported by the Global Network Operations Center (NOC) at Indiana University
  • Internet2 Observatory will be expanded to enable data collection at all layers, with datasets available to network researchers; also, support for equipment colocation in optical nodes

Saturday, February 03, 2007

Glitch hampers Vista family pack option

By Ina Fried
Staff Writer, CNET News.com
Published: February 2, 2007, 2:33 PM PST

Microsoft said Friday that it is working to resolve a glitch that prevented some customers from taking advantage of the company's Vista family pack option.

The company is offering Vista Ultimate buyers in the U.S. and Canada the option of purchasing up to two additional licenses of Vista Home Premium for $50 apiece. However, some early takers on the offer got product license keys that did not work.

The software maker told CNET News.com Friday that it is aware of the issue and is working to fix things.

"New product keys are on the way as of this afternoon to the small group of customers who have been affected," a Microsoft representative said in an e-mail. "We've also taken steps to ensure that the issue is resolved going forward."

The family pack option, billed as a limited-time offer, is a new option with Vista. Amid much marketing fanfare, the Microsoft operating system went on sale to consumers on Tuesday.

Darryl Whitworth, of Duncan, British Columbia, said he bought Vista Ultimate primarily for the family pack option, but got an error message when he tried to buy the additional licenses.

"I am not sure if I would have purchased the (more expensive) Ultimate version if this offer had not been available," Whitworth said in an e-mail interview.

Ultimate carries a suggested price tag of $399, or $259 for those upgrading from Windows XP or Windows 2000, as compared to the Home Basic and Home Premium options, which range from $99 to $159 for the upgrade and $199 to $239 for the full version.

Enemy inside the firewall

ILP software and strategies help ensure information doesn't land in the wrong hands



By Roger A. Grimes


February 02, 2007

Corporate security lapses are once again sweeping the news hour, but these days the culprit is just as likely to be an inside source -- a paid employee at a reputable company -- as a hacker doing evil somewhere in a Moscow basement.

Pity poor Boeing, which made headlines in December after personal information belonging to approximately 382,000 retired and current employees, including salaries, Social Security numbers, and home addresses, was stolen. According to news reports, a thief made off with an employee’s laptop. Unfortunately, the laptop’s owner violated Boeing’s policy by failing to encrypt the data after it had been downloaded from a server. In an e-mail sent to Boeing employees, Jim McNerney, chairman, president, and CEO, wrote, “This latest incident resulted from a clear violation of our data-protection policy.”


That wouldn’t surprise Brian Contos, CSO of security vendor ArcSight and author of Enemy at the Water Cooler: Real-Life Stories of Insider Threats and Enterprise Security Management Countermeasures. In the book, he notes, “Too often policies and procedures are outdated, forgotten, not well-communicated through awareness programs, or not even written.”

Financial liability aside, information leaks can disrupt corporate strategy and leave an embarrassing bruise. In January, full details about Cingular Wireless’s latest Palm Treo 750 were leaked to the Web a week before the announcement date. A sales presentation that was supposed to be embargoed until the big day instead made its premature debut on Engadget Mobile.

Such events are leading to a surge of interest in ILP (information leak prevention), which targets policy-compliance monitoring and enforcement pertaining to information on the desktop and all data that moves along the internal network and across the corporate boundary. “Maybe we were naïve, but until we installed PortAuthority at the beginning of 2006 we had no system for auditing [outbound] e-mail,” says Ron Uno, an IT manager at Kuakini Health System and a key player in an ongoing effort to be HIPAA-compliant. “It flags everything [suspicious].”

A Fortune 1000 CSO who asked that his name be withheld describes both the frustration and urgency bound with ILP. “If an employee goes against company policy and takes data home to be more productive, how would I know? Not a single person in any company knows where all the data is. And if you don’t know where the data is, how can you even begin to protect it?”

Here’s the plan

Protecting information assets will always be a challenge of the highest order, but there are specific tasks you can perform to decrease your risk.

The first step in the ILP process is to develop a data protection policy. Corporate security officers should evaluate their ILP threats and institute risk-appropriate solutions.

Company officials must first decide what information is important to keep confidential. How can the data be accessed? Who can access it? When? And for how long? Information must be assigned a value, using implicit and explicit costs. The relative threats and risks to it must be evaluated, and a cost-of-defense threshold developed. A determination must be made as to how much the company is willing to spend to protect its confidential information.

Defining the confidential and critical information, the risks to each type of information, and the value to the organization allows ILP planners to focus on mission-critical assets first. In short, a data-protection plan follows the same steps that an organization would take when developing a business continuity plan -- only the focus is different. In a business continuity or disaster recovery plan, the focus is on the infrastructure and processes, and what it takes to make a company’s mission-critical tasks operational again. A data protection policy is by contrast information-centric.



After the data-protection policy is developed, educating employees is the next order of business. Understanding and adhering to the policy should be part of the hiring package, and employees should know the consequences, for example, for taking home data without permission. Further, there are numerous policies to prevent information loss that leave users out of the loop, regardless of whether or not their intentions are malicious. One is to ensure that backup media is encrypted by default; another is to disable USB to prevent loss by way of flash memory drives. Whatever the policies are, they should be clearly communicated to staff and contractors in writing.

Information is power

Next, information stores and communication channels must be defined. IT must know where all the critical data is stored and how it’s communicated between hosts. Consider client computers, file servers, e-mail servers, print servers, and database servers. Information is often transmitted using HTTP and e-mail, but don’t forget instant-messaging channels or removable media such as DVDs, CD-ROMs, and USB flash drives.

Also consider third parties if they store or have access to your data. Negotiating the right to inspect and audit their controls on a periodic basis can go a long way toward reducing risk. It's wise to include a clause in your contract stating that they forfeit the job the minute they fail to ensure adequate controls.

After you’ve hypothesized where the information is, find it and monitor it. Several vendors make tools that look for confidential information. Some scan server and workstation hard drives, looking for telltale signs of protected data; a predefined format such as XXX-XX-XXXX, for example, would be recognized as a Social Security number and trigger an alert. Other tools do the same while listening on network connections.
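At its core, that kind of detection is pattern matching. The Python sketch below shows the simplest possible version with a regular expression; real products layer dictionaries, context analysis, and validity checks on top of this, since a bare pattern also matches plenty of innocent numbers.

    # Minimal sketch of the pattern matching such scanners perform. Real
    # products add dictionaries, context, and validity checks to cut the
    # false positives a bare regex like this produces.
    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def scan(text):
        """Return anything shaped like a Social Security number."""
        return SSN_PATTERN.findall(text)

    document = "Employee 4411, SSN 123-45-6789, salary on file."
    for match in scan(document):
        print("possible SSN found:", match)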

PortAuthority, which was recently acquired by Websense, sells software called Precise ID. The software uses multiple detection methods to identify and classify structured or unstructured data, including rules, dictionaries, keywords, threshold counts, categories, lexicons, statistical analysis, and content-matching. It recognizes more than 370 file formats, including popular archival types such as .zip. Searches can be made on storage media (what PortAuthority calls “data at rest”) or while the data is in use.

Evaluate your options

Preventing data leaks requires a multipronged approach. Although no single product can do it all, many companies are buying ILP-specific technologies, such as those found in PortAuthority Technologies’ product line.

Securent CEO Rajiv Gupta puts it best: “Companies are tasked with making their applications more broadly available to a wider range of customers, end-users, and partners, while at the same time making sure unauthorized access isn’t granted. If everyone is sharing the same data, it often takes a ‘Chinese Wall’ type of product to help keep users and data appropriately segregated to ensure compliance.”

Securent’s Entitlement Management Solutions attempt to keep internal and external parties away from data sources by focusing on authorization. Other vendor systems handle identity and authentication, but then administrators define user authorization policies to enforce who can see what. Securent’s products work by wrapping end-point applications in an application-level shim. Securent’s policy engine and enforcement points can provide additional granularity that the application or operating system itself cannot deliver.

PortAuthority Protector Appliances passively monitor network communications, looking for confidential data in e-mail, IM, file transfers, and Web postings. If protected content is detected, the information is dropped, the device or its port may be disconnected, and management is notified.

Tablus offers similar protection with its Content series of products. Tablus even comes with several built-in policies that understand what types of information fall under different compliance categories.

Other vendors are providing solutions that lock out inappropriate uses of the data. Microsoft’s RMS (Rights Management Services) software encrypts protected data. The data owner or originator can decide what users and uses are valid. For example, data can be sent out to a select group -- some people on the mailing list can edit, print, and forward the data; others may be able only to view the data. Every time a protected data file is accessed, it must “phone home” to an RMS authentication server before the encryption is removed. An employee could be terminated, and even though the former worker has a copy of the document at home in an e-mail inbox, he or she may be unable to open it any longer.
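Microsoft doesn't expose RMS's protocol here, but the "phone home" gate is easy to sketch generically. In the hypothetical Python fragment below, the server URL, the verdict format, and the decrypt callable are all invented for illustration; the point is that the content is decrypted only after the rights server approves, which is why a revoked user can't open even a locally saved copy.

    # Generic sketch of the "phone home" gate -- not Microsoft's actual
    # RMS protocol. The server URL and verdict format are hypothetical.
    import urllib.request

    RIGHTS_SERVER = "https://rms.example.com/authorize"   # hypothetical

    def open_protected(doc_id, user, ciphertext, decrypt):
        url = f"{RIGHTS_SERVER}?doc={doc_id}&user={user}"
        with urllib.request.urlopen(url) as resp:
            verdict = resp.read().decode().strip()
        if verdict != "allow":
            raise PermissionError(f"{user} is not authorized for {doc_id}")
        return decrypt(ciphertext)   # key released only after approval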

Thin client manufacturers say they're seeing rising interest in sales as the stripped-down machines (no CD-ROM, no USB ports, and so forth) are increasingly viewed as useful in improving overall security while working toward desktop consistency.

Several vendors offer delete-after-it-is-stolen solutions. Suppose a covered laptop containing confidential data is stolen. An administrator can tell a centralized control program that the laptop has been stolen. When the stolen laptop is plugged in and hooked to the Internet for the first time, a hidden client-side app dials home and gets the self-destruct order to format itself. The client-side software program is often configured to self-destruct if it hasn’t reached its server program in X number of days.
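No specific vendor is named for this mechanism, so the sketch below is vendor-neutral and hypothetical. It shows the two triggers just described: an explicit wipe order from the control server, and a dead-man switch that fires when the client hasn't reached the server in too many days.

    # Vendor-neutral sketch of stolen-laptop check-in logic. The order
    # format and the 30-day window are invented for illustration.
    import datetime

    MAX_SILENT_DAYS = 30

    def check_in(fetch_order, last_contact, wipe_disk):
        try:
            order = fetch_order()          # e.g., an HTTPS call home
        except OSError:
            silent = datetime.date.today() - last_contact
            if silent.days > MAX_SILENT_DAYS:
                wipe_disk()                # dead-man switch fired
            return
        if order == "self-destruct":
            wipe_disk()                    # laptop was reported stolen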

The call for ILP is being heard far and wide. Many CSOs are focusing on easy-to-see, high-risk data areas such as lost or stolen laptops. Most are buying encryption products for mobile computing devices and media as a matter of course. Most USB flash memory drives come with built-in encryption. Tape backup software is making its encryption option easy to find and enable. Encryption providers are seeing huge sales growth. OS vendors, such as Microsoft, are providing built-in disk or volume encryption products.

ILP is more than just another acronym; it should be high on your to-do list this year. Otherwise you might be hearing about your oversight on the evening news.

Users, analysts: No rush to adopt Exchange 2007

New hardware requirements, compatibility problems, and a new architecture push many companies into a 'wait and see' approach to upgrading



By Elizabeth Montalbano, IDG News Service


February 02, 2007

Windows Vista isn't the only recently released Microsoft software that will give users headaches when they upgrade their systems. Corporate users, partners, and analysts said upgrading to Exchange Server 2007 from previous versions also may be a lengthy and painful process for companies, which may want to take a wait-and-see approach to the new software.

New hardware requirements, incompatibilities with other Microsoft software, and the complexity of the product's new architecture are just a few of the issues that will make a move to Exchange 2007 from Exchange 2003 or earlier versions costly and difficult for IT administrators, said Microsoft partners and analysts.

"There are about 6,000 pages of documentation that an IT administrator will have to wade through to deploy [Exchange Server 2007], said Keith McCall, a former Exchange director at Microsoft and now chief technology officer and founder of Azaleos, which offers an Exchange Server appliance and turnkey product. You need to think hard, and you need to plan your server infrastructure to add the value and new functionality of Exchange 2007."

Exchange 2007 is the first major update since Exchange 2003, and it is the first version of the software that runs only on 64-bit servers. Previously, Exchange ran on 32-bit servers, so customers will not be able to just switch out their current version of Exchange to a new one. They will be required to update the hardware.

To its credit, Microsoft alerted customers in November 2005 that this would be the case. But the 64-bit transition is not the only hardware headache associated with upgrading to Exchange 2007, McCall said.

Because of the various server "roles" Microsoft has introduced for Exchange Server 2007, the software can no longer be set up for high availability on two servers -- one for the roles and one for failover, he said. There are five different server roles for Exchange 2007 -- mailbox, client access, unified messaging, routing and hub transport, and edge transport, which filters e-mail before it hits the mail store.

"If you want to run all of the three primary Exchange 2007 roles (mailbox server, hub transport, and client access) with high availability, you need at least four servers, twice as many as you needed in Exchange 2003," he said. "If you want to add the unified message and edge transport roles, you need six servers."

If you don't want high availability, you can run all the roles of Exchange Server 2007 on one server, except for the edge transport server, which requires its own, according to Jeff Ressler, director of the Exchange Server Group at Microsoft. He said Microsoft created Exchange 2007 with these new roles clearly defined because it is easier for administrators to proactively choose which role they want Exchange to play in a network. Previously, they had to go in and turn off the roles they didn't want.

"In the new world, you would choose, 'I want that to be a client access server,' and just install those components," Ressler said.

Hardware is not the only pain point for customers upgrading to the new version of Exchange. Duncan Greatwood, CEO of PostPath, in Mountain View, Calif., said there are some incompatibilities between Exchange Server 2007 and Windows Vista, the new version of Windows that was released to business customers in November and to consumers earlier this week. PostPath has a mail and messaging server that runs on Linux and can connect seamlessly with Exchange Servers on a network.

A key problem is that the management tools for Exchange Server 2007 don't run on Vista, Greatwood said. Typically, an IT administrator will deploy Exchange and its management tools on a server and then deploy the tools on his or her own desktop machine so the server can be maintained from there, he said. However, if an administrator's desktop is running Vista, that desktop can't run the tools, Greatwood said.

PostPath developed its software by figuring out Exchange's proprietary network protocols and building them into its product to enable the migration, Greatwood said. Microsoft also licenses some of Exchange's networking protocols to third parties, Ressler said.

"If you have an Outlook desktop, in a completely unmodified way you can use PostPath just like Exchange," Greatwood said. "Outlook doesn't know it's talking to PostPath; it thinks it's talking to Exchange."

For some companies, this could be a good alternative if they don't want to undertake the task of upgrading to Exchange Server 2007, said Maurene Grey, founder and principal analyst of Grey Consulting. She said the move from Exchange 2003, which many people are running, to Exchange 2007 is the same kind of leap as the one from Exchange 5.5 to 2000, when Microsoft made a similar overhaul of the product's architecture.

For some companies that are diehard Microsoft loyalists, however, the question of upgrading to Exchange Server 2007 is not one of if but when, Grey said.

"When an organization made their last major upgrade will determine to a great extent when they'll make this upgrade," she said. "If a company just spent $2 million two years ago to upgrade from 5.5 to 2003, the CFO isn't going to be that eager to give them the money to make this upgrade."

Hackers target hole in BrightStor

Exploit code hits Windows XP and Windows 2000 systems



By Paul F. Roberts, IDG News Service


February 02, 2007

Anti-virus firm Symantec warned today that exploit code is circulating for a known security hole in Computer Associates' BrightStor ARCServe Backup software, which provides data backup and restore for a variety of operating systems including Windows, NetWare, Linux, Unix, and Mac.

Symantec issued an alert early Friday, after exploit code was posted to the SecurityFocus Web site. The alert raised the urgency and severity of an earlier warning about the security holes in ARCServe Backup versions 9.01 through 11.5 SP1, as well as CA's Business Protection Suite software. The exploit code is designed to run on Windows XP and Windows 2000 systems.

The remote buffer overflow vulnerability in BrightStor was initially disclosed on January 12, when CA released a patch to fix the hole.

According to CA, the flaw results from insufficient bounds checking on user-supplied data. Attackers could trigger the overflow using specially crafted RPC (Remote Procedure Call) requests sent to TCP ports 6503 or 6504. Triggering a buffer overflow would allow attackers to run malicious code on the vulnerable system with administrative privileges, allowing them to take control of the vulnerable machine.
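CA hasn't published the flawed code, but the bug class is well understood. The Python sketch below paraphrases it with a hypothetical length-prefixed request: the check shown is precisely what vulnerable C code omits before copying a user-controlled number of bytes into a fixed-size buffer.

    # Hypothetical length-prefixed request parser. The bounds check below
    # is what vulnerable C code typically omits before a memcpy into a
    # fixed-size buffer -- this is a paraphrase, not CA's code.
    import struct

    MAX_PAYLOAD = 4096   # size of the receiver's fixed buffer

    def parse_request(packet: bytes) -> bytes:
        claimed_len, = struct.unpack_from(">I", packet, 0)
        payload = packet[4:]
        if claimed_len > MAX_PAYLOAD or claimed_len != len(payload):
            raise ValueError("length field out of bounds")
        return payload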

Backup software is a particularly attractive target for malicious hackers, because the systems -- by their nature -- store large volumes of data that can be accessed when the systems are compromised, said Max Caceras, director of product management at Core Security.

BrightStor customers are advised to apply the patch that fixes the vulnerability, block external access to the BrightStor software, or use an intrusion-detection system to spot attacks.