Monday, February 18, 2008

Advanced Encryption Standard, The Latest Encryption Algorithm

Advanced Encryption Standard (AES) is the latest encryption standard used to protect confidential information, such as financial data, for government and commercial use. It is a strong symmetric encryption algorithm approved by NIST (the National Institute of Standards and Technology) to replace the Data Encryption Standard (DES) and Triple DES encryption algorithms. DES is arguably the most important and widely used cryptographic algorithm in the world. However, its usefulness is now quite limited after years of advances in computational technology: a DES key can be cracked after several hours of number crunching. Using dedicated hardware, the Electronic Frontier Foundation managed to break it in 22 hours (http://www.rsasecurity.com/rsalabs/des3/).

Against that backdrop, NIST was commissioned to oversee the development of the next-generation symmetric cryptographic algorithm, called the Advanced Encryption Standard (AES). On January 2, 1997, NIST announced the initiation of the AES development effort, and it made a formal call for algorithms on September 12, 1997. The call stipulated that the AES must implement symmetric key cryptography as a block cipher and (at a minimum) support a block size of 128 bits and key sizes of 128, 192, and 256 bits. In October 2000, NIST selected the Rijndael algorithm (pronounced roughly "Rhine Dahl") to be the proposed AES, owing to its high security strength, computational and memory efficiency, configurability, and simplicity. It can be implemented in a wide range of devices, from low-memory devices like smart cards to high-end workstations. Rijndael was finalized as the AES standard in November 2001 as FIPS 197 (http://csrc.nist.gov/publications/fips/). It is a 128-bit (16-byte) block cipher with key sizes ranging from 128 bits to 256 bits, offering far greater security than the DES standard, which supports only 56-bit keys.

Advanced Encryption Standard (AES), also known as Rijndael, is a block cipher adopted as an encryption standard by the U.S. government. It has been analyzed extensively and is now used worldwide, as was the case with its predecessor,[3] the Data Encryption Standard (DES). AES was announced by the National Institute of Standards and Technology (NIST) as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001, after a five-year standardization process (see the Advanced Encryption Standard process for more details). It became effective as a standard on May 26, 2002. As of 2006, AES is one of the most popular algorithms used in symmetric key cryptography, and it is available by choice in many different encryption packages.
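Indeed, most modern crypto libraries expose AES directly. As a quick, illustrative sketch using the third-party Python "cryptography" package (assuming a recent version, which no longer requires an explicit backend argument; the key, IV and message here are arbitrary test values, not from any standard):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # a random 256-bit AES key
    iv = os.urandom(16)    # a random 128-bit IV for CBC mode
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

    # CBC mode needs input whose length is a multiple of the 16-byte block size.
    encryptor = cipher.encryptor()
    ciphertext = encryptor.update(b"sixteen byte msg") + encryptor.finalize()

    decryptor = cipher.decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == b"sixteen byte msg"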

The cipher was developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, and submitted to the AES selection process under the name "Rijndael", a portmanteau of the names of the inventors. (Rijndael is pronounced [rɛindaːl], which sounds almost like "Rhine dahl")

Description of the cipher

Strictly speaking, AES is not precisely Rijndael (although in practice they are used interchangeably) as Rijndael supports a larger range of block and key sizes; AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael can be specified with key and block sizes in any multiple of 32 bits, with a minimum of 128 bits and a maximum of 256 bits.
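Spelled out in a couple of lines (Python is used for all the code sketches in this post):

    # Rijndael accepts any block or key size that is a multiple of 32 bits
    # between 128 and 256; AES pins the block size to 128 bits.
    rijndael_sizes = [s for s in range(128, 257) if s % 32 == 0]
    assert rijndael_sizes == [128, 160, 192, 224, 256]
    aes_block_size, aes_key_sizes = 128, [128, 192, 256]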

Due to the fixed block size of 128 bits, AES operates on a 4×4 array of bytes, termed the state (versions of Rijndael with a larger block size have additional columns in the state). Most AES calculations are done in a special finite field.
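Concretely, FIPS 197 fills that 4×4 state column by column: state[r][c] = input[r + 4*c]. A small sketch of the mapping:

    def bytes_to_state(block: bytes):
        """Map a 16-byte block to the 4x4 AES state, column-major per FIPS 197."""
        assert len(block) == 16
        return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

    def state_to_bytes(state) -> bytes:
        """Inverse mapping: read the state back out column by column."""
        return bytes(state[r][c] for c in range(4) for r in range(4))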

High-level cipher algorithm

* KeyExpansion using Rijndael's key schedule
* Initial Round:
  1. AddRoundKey
* Rounds:
  1. SubBytes — a non-linear substitution step where each byte is replaced with another according to a lookup table.
  2. ShiftRows — a transposition step where each row of the state is shifted cyclically a certain number of steps.
  3. MixColumns — a mixing operation that operates on the columns of the state, combining the four bytes in each column.
  4. AddRoundKey — each byte of the state is combined with the round key; each round key is derived from the cipher key using a key schedule.
* Final Round (no MixColumns; a sketch assembling all of these steps appears after the AddRoundKey section below):
  1. SubBytes
  2. ShiftRows
  3. AddRoundKey
The SubBytes step
In the SubBytes step, each byte in the state is replaced with its entry in a fixed 8-bit lookup table, S; b[i,j] = S(a[i,j]).

In the SubBytes step, each byte in the array is updated using an 8-bit substitution box, the Rijndael S-box. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), and also any opposite fixed points.
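Because the construction is fully specified, the S-box can be derived rather than hard-coded. A sketch along the lines just described (gf_mul multiplies modulo the AES polynomial x^8 + x^4 + x^3 + x + 1; the asserted value S(0x53) = 0xED is the worked example given in FIPS 197):

    def gf_mul(a: int, b: int) -> int:
        """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
        p = 0
        while b:
            if b & 1:
                p ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
            b >>= 1
        return p

    def gf_inv(a: int) -> int:
        """Multiplicative inverse in GF(2^8), with 0 mapped to 0 by convention."""
        return next(x for x in range(256) if gf_mul(a, x) == 1) if a else 0

    def sbox_entry(a: int) -> int:
        """Inverse in GF(2^8) followed by the fixed affine transformation."""
        b, c, y = gf_inv(a), 0x63, 0
        for i in range(8):
            bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
                   ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (c >> i))
            y |= (bit & 1) << i
        return y

    S = [sbox_entry(a) for a in range(256)]
    assert S[0x00] == 0x63 and S[0x53] == 0xED
    # No fixed points and no "opposite" fixed points, as noted above.
    assert all(S[a] != a and S[a] != (a ^ 0xFF) for a in range(256))

    def sub_bytes(state):
        return [[S[b] for b in row] for row in state]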

The ShiftRows step

In the ShiftRows step, bytes in each row of the state are shifted cyclically to the left. The number of places each byte is shifted differs for each row.

The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged, each byte of the second row is shifted one position to the left, and the third and fourth rows are shifted by offsets of two and three respectively. The shifting pattern is the same for 128-bit and 192-bit blocks. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. (Rijndael variants with a larger block size have slightly different offsets.) For a 256-bit block, the first row is unchanged and the shifts for the second, third and fourth rows are 1, 3 and 4 bytes respectively; this change applies only to the Rijndael cipher when used with a 256-bit block, which is not used for AES.
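For the 128-bit AES state this is a one-line rotation per row, using the row-major state from the earlier mapping sketch:

    def shift_rows(state):
        """Rotate row r of the state left by r bytes (AES offsets 0, 1, 2, 3)."""
        return [row[r:] + row[:r] for r, row in enumerate(state)]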

The MixColumns step

In the MixColumns step, each column of the state is multiplied with a fixed polynomial c(x).

In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher. Each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 3x^3 + x^2 + x + 2. The MixColumns step can also be viewed as a multiplication by a particular MDS matrix in Rijndael's finite field.

This process is described further in the article Rijndael mix columns.
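Written out, each output byte of a column is a GF(2^8) combination of the column's four input bytes, with the coefficients 2, 3, 1, 1 rotating through the rows. A sketch reusing gf_mul from the S-box sketch above; the asserted input/output pair is a widely published MixColumns test column:

    def mix_column(col):
        a0, a1, a2, a3 = col
        return [
            gf_mul(a0, 2) ^ gf_mul(a1, 3) ^ a2 ^ a3,
            a0 ^ gf_mul(a1, 2) ^ gf_mul(a2, 3) ^ a3,
            a0 ^ a1 ^ gf_mul(a2, 2) ^ gf_mul(a3, 3),
            gf_mul(a0, 3) ^ a1 ^ a2 ^ gf_mul(a3, 2),
        ]

    def mix_columns(state):
        # The state is stored row-major here, so assemble each column first.
        cols = [mix_column([state[r][c] for r in range(4)]) for c in range(4)]
        return [[cols[c][r] for c in range(4)] for r in range(4)]

    assert mix_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]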

The AddRoundKey step

In the AddRoundKey step, each byte of the state is combined with a byte of the round subkey using the XOR operation (⊕).

In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR.
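AddRoundKey is a plain XOR, which makes it easy to finish the picture: the sketch below assembles the full round structure listed earlier from the helpers in the previous sections. It assumes the round keys have already been expanded into 4×4 states; the key schedule itself is omitted here:

    def add_round_key(state, round_key):
        """XOR each byte of the state with the corresponding round-key byte."""
        return [[s ^ k for s, k in zip(s_row, k_row)]
                for s_row, k_row in zip(state, round_key)]

    def aes_encrypt_block(block: bytes, round_keys) -> bytes:
        """round_keys: 11, 13 or 15 expanded keys for 128-, 192- or 256-bit keys."""
        state = add_round_key(bytes_to_state(block), round_keys[0])
        for rk in round_keys[1:-1]:  # the main rounds
            state = add_round_key(mix_columns(shift_rows(sub_bytes(state))), rk)
        state = shift_rows(sub_bytes(state))  # final round: no MixColumns
        return state_to_bytes(add_round_key(state, round_keys[-1]))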

Optimization of the cipher

On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining SubBytes and ShiftRows with MixColumns, transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables, which together occupy four kilobytes (4096 bytes) of memory, one kilobyte per table. A round can then be done with 16 table lookups and twelve 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step.

If the resulting four kilobyte table size is too large for a given target platform, the table lookup operation can be performed with a single 256-entry 32-bit table by the use of circular rotates.

Using a byte-oriented approach it is possible to combine the SubBytes, ShiftRows, and MixColumns steps into a single round operation.
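To make the table construction concrete, here is a sketch deriving the combined tables from the S-box and gf_mul defined earlier. Each 32-bit entry packs the MixColumns output column (2, 1, 1, 3) of a substituted byte; the packing order shown is one common convention only, and real implementations vary:

    # T0[a] holds the MixColumns output column for S[a], packed into 32 bits.
    T0 = [(gf_mul(S[a], 2) << 24) | (S[a] << 16) | (S[a] << 8) | gf_mul(S[a], 3)
          for a in range(256)]

    def rotr8(w: int) -> int:
        """Circular right-rotation by one byte; derives T1..T3 from T0."""
        return ((w >> 8) | ((w & 0xFF) << 24)) & 0xFFFFFFFF

    T1 = [rotr8(w) for w in T0]
    T2 = [rotr8(w) for w in T1]
    T3 = [rotr8(w) for w in T2]

    # Four tables x 256 entries x 4 bytes each = the 4096 bytes cited above;
    # keeping only T0 and rotating at lookup time trades 3 KB for rotations.
    assert sum(len(t) for t in (T0, T1, T2, T3)) * 4 == 4096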

Security

As of 2006, the only successful attacks against AES implementations have been side channel attacks. The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for US Government non-classified data. In June 2003, the US Government announced that AES may be used for classified information:

"The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use."[5]

This marks the first time that the public has had access to a cipher approved by NSA for encryption of TOP SECRET information. Many public products use 128-bit secret keys by default; it is possible that NSA suspects a fundamental weakness in keys this short, or they may simply prefer a safety margin for top secret documents (which may require security decades into the future).

The most common way to attack block ciphers is to try various attacks on versions of the cipher with a reduced number of rounds. AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.[6]

Some cryptographers worry about the security of AES. They feel that the margin between the number of rounds specified in the cipher and the best known attacks is too small for comfort. There is a risk that some way to improve such attacks might be found, and then the cipher could be broken. In this sense, a cryptographic "break" is anything faster than an exhaustive search, so an attack against 128-bit-key AES requiring 'only' 2^120 operations (compared to 2^128 possible keys) would be considered a break even though it would be, at present, quite infeasible. In practical application, any break of AES that is only that "good" would be irrelevant. At present, such concerns can be ignored. The largest successful publicly known brute-force attack has been against a 64-bit RC5 key by distributed.net.

Other debate centers on the mathematical structure of AES. Unlike most other block ciphers, AES has a very neat algebraic description.[7] This has not yet led to any attacks, but some researchers feel that basing a cipher on a new hardness assumption is risky. This has led Ferguson, Schroeppel, and Whiting to write, "...we are concerned about the use of Rijndael [AES] in security-critical applications."[8]

In 2002, a theoretical attack, termed the "XSL attack", was announced by Nicolas Courtois and Josef Pieprzyk, showing a potential weakness in the AES algorithm.[9] Several cryptography experts have found problems in the underlying mathematics of the proposed attack, suggesting that the authors may have made a mistake in their estimates. Whether this line of attack can be made to work against AES remains an open question. At present, the XSL attack against AES appears speculative; it is unlikely that the current attack could be carried out in practice.

Side channel attacks

Side channel attacks do not attack the underlying cipher and so have nothing to do with its security as described here, but attack implementations of the cipher on systems which inadvertently leak data. There are several such known attacks on certain implementations of AES.

In April 2005, D.J. Bernstein announced a cache timing attack that he used to break a custom server that used OpenSSL's AES encryption.[10] The custom server was designed to give out as much timing information as possible, and the attack required over 200 million chosen plaintexts. Some say the attack is not practical over the internet with a distance of one or more hops;[11] Bruce Schneier called the research a "nice timing attack."[12]

In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache timing attacks against AES.[13] One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system that is performing AES.

FIPS Validation

The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of validated cryptographic modules is required by the United States Government for all unclassified uses of cryptography. The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.

Although NIST publication 197 ("FIPS 197") is the only document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as 3DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated, and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as a "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.

FIPS validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of several days. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $10,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be resubmitted and reevaluated if they are changed in any way.

Wednesday, October 31, 2007

RSA '07: New threats could hamper traditional antivirus tools

An emerging breed of sophisticated malware is raising doubts about the ability of traditional signature-based security software to fend off new viruses and worms, according to experts at this week's RSA Security Conference in San Francisco.
Signature-based technologies are now "crumbling under the pressure of the number of attacks from cybercriminals," said Art Coviello, president of RSA, the security division of EMC. This year alone, about 200,000 virus variants are expected to be released, he said. At the same time, antivirus companies are, on average, at least two months behind in tracking malware. And "static" intrusion-detection systems can intercept only about 70 percent of new threats.

"Today, static security products are just security table stakes," Coviello said. "Tomorrow, they'll be a complete waste of money. Static solutions are not enough for dynamic threats."

What's needed instead are multilayered defenses -- and a more information-centric security model, Coviello said. "[Antivirus products] may soon be a waste of money, not because viruses and worms will go away," but because behavior-blocking and "collective intelligence" technologies will be the best way to effectively combat viruses, he said.

Unlike the low-variant, high-volume threats of the past, next-generation malware is designed explicitly to beat signature-based defenses by coming in low-volume, high-variant waves, said Amir Lev, president of Commtouch Software, an Israeli vendor whose virus-detection engines are widely used in several third-party products.

Until last year, most significant e-mail threats aimed for wide distribution of the same malicious code, Lev said. The goal in writing such code was to infect as many systems as possible before antivirus vendors could propagate a signature. Once a signature became available, such viruses were relatively easy to block.

New server-side polymorphic virus threats like the recent Storm worm, however, contain a staggering number of distinct, low-volume and short-lived variants and are impossible to stop with a single signature, Lev said. Typically, such viruses are distributed in successive waves of attacks in which each variant tries to infect as many systems as possible and stops spreading before antivirus vendors have a chance to write a signature for it.
Storm had more than 40,000 distinct variants and was distributed in short, rapid-fire bursts of activity in an effort to overwhelm signature- and behavior-based antivirus engines, Lev said.
One example of such malware is WinTools, which has been around since 2004 and installs a toolbar, along with three separate components, on infected systems. Attempts to remove any part of the malware cause the other parts to simply replace the deleted files and restart them. The fragmented nature of such code makes it harder to write removal scripts and to know whether all malicious code has actually been cleaned off a computer.

New version of Storm virus infects blogs and other Web postings

A new version of the Storm e-mail virus is populating blogs and online bulletin boards with links directing people to a Web site that is propagating the worm, representing a new mode of attack for hackers seeking financial gain, according to a security vendor that became aware of the virus Monday night.
The Storm worm attacks in December and January used infected e-mails to hijack personal computers and add them to “bot-nets,” networks of infected computers used by hackers to distribute spam and viruses.
Within the past day, a variation of this virus was found to be using infected computers to place malicious links on various Web sites, according to Secure Computing, a messaging security vendor based in San Jose, Calif.

If your computer is infected, the virus can add malicious text to any message you post to a blog or bulletin board. The text says, “Have you seen this?” and is followed by a URL containing the phrases “freepostcards” and “funvideo.”

“The new thing about this virus is the way it propagates. It’s basically filling up Web pages all over the Internet with links to the malware,” says Dmitri Alperovitch, principal research scientist for Secure Computing.

A Google search on Tuesday afternoon located 71 sites containing the link, including message boards hosted by the Salt Lake Tribune and a site about Australian pythons and snakes.

Clicking on the link causes the virus to be downloaded to the user’s computer. “It turns you into a zombie. Your computer is now under full control under the criminal that is in control of this bot-net,” Alperovitch says.
The virus is a rootkit that integrates fully into an operating system, so it scans traffic to and from your machine and could intercept your bank account information or other sensitive data. The bot-net can also be used to launch an attack against a Web site, effectively shutting the site down by flooding it with traffic from infected computers, Alperovitch states. Hackers sometimes launch these attacks so they can demand ransom money from Web site owners in exchange for stopping the attack, according to Alperovitch.
Some antivirus programs have trouble finding the virus, he says, but you can figure out if your computer is infected by posting to a blog or bulletin board and seeing if your message contains the malicious link.

Typically, though, a user will not realize he or she is infected, and people who read postings to blogs and bulletin boards may be fooled into thinking the link should be trusted.

“Because they’re not looking to destroy data on your machine, you may not realize until much later that anything is happening,” Alperovitch says.

Gathering 'Storm' Superworm Poses Grave Threat to PC Nets

The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: "230 dead as storm batters Europe." Those who opened the attachment became infected, their computers joining an ever-growing botnet.

Although it's most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It's also the most successful example we have of a new breed of worm, and I've seen estimates that between 1 million and 50 million computers have been infected worldwide.

Old style worms -- Sasser, Slammer, Nimda -- were written by hackers looking for fame. They spread as quickly as possible (Slammer infected 75,000 computers in 10 minutes) and garnered a lot of notice in the process. The onslaught made it easier for security experts to detect the attack, but required a quick response by antivirus companies, sysadmins and users hoping to contain it. Think of this type of worm as an infectious disease that shows immediate symptoms.

Worms like Storm are written by hackers looking for profit, and they're different. These worms spread more subtly, without making noise. Symptoms don't appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.

Storm represents the future of malware. Let's look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. By only allowing a small number of hosts to propagate the virus and act as command-and-control servers, Storm is resilient against attack. Even if those hosts shut down, the network remains largely intact, and other hosts can take over those duties.

3. Storm doesn't cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. This makes it harder to detect, because users and network administrators won't notice any abnormal behavior most of the time.

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. The most common way to disable a botnet is to shut down the centralized control point. Storm doesn't have a centralized control point, and thus can't be shut down that way.

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn't show up as a spike. Communications are much harder to detect.

One standard method of tracking root C2 servers is to put an infected host through a memory debugger and figure out where its orders are coming from. This won't work with Storm: An infected host may only know about a small fraction of infected hosts -- 25-30 at a time -- and those hosts are an unknown number of hops away from the primary C2 servers.

And even if a C2 node is taken down, the system doesn't suffer. Like a hydra with many heads, Storm's C2 structure is distributed.

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called "fast flux." So even if a compromised host is isolated and debugged, and a C2 server identified through the cloud, by that time it may no longer be active.

6. Storm's payload -- the code it uses to spread -- morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.

7. Storm's delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites -- anything to entice users to click on a phony link. Storm also started posting blog-comment spam, again trying to trick viewers into clicking infected links. While these sorts of things are pretty standard worm tactics, it does highlight how Storm is constantly shifting at all levels.

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. There are always new subject lines and new enticing text: "A killer at 11, he's free at 21 and ...," "football tracking program" on NFL opening weekend, and major storm and hurricane warnings. Storm's programmers are very good at preying on human nature.

9. Last month, Storm began attacking anti-spam sites focused on identifying it -- spamhaus.org, 419eater and so on -- and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy's reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.

Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can't imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn't work in any case: Storm's creators could easily design another worm -- and we know that users can't keep themselves from clicking on enticing attachments and links.

Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest. Creating a counterworm would make a great piece of fiction, but it's a really bad idea in real life. We simply don't know how to stop Storm, except to find the people controlling it and arrest them.

Unfortunately we have no idea who controls Storm, although there's some speculation that they're Russian. The programmers are obviously very skilled, and they're continuing to work on their creation.

Oddly enough, Storm isn't doing much, so far, except gathering strength. Aside from continuing to infect other Windows machines and attacking particular sites that are attacking it, Storm has only been implicated in some pump-and-dump stock scams. There are rumors that Storm is leased out to other criminal groups. Other than that, nothing.

Personally, I'm worried about what Storm's creators are planning for Phase II.

- - -

Bruce Schneier is CTO of BT Counterpane and author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World.

Tuesday, August 28, 2007

Second Life

TV Station: Television Broadcasting in Second Life

Posted Jul 31, 2007

TV Station produces movies (machinima) for your advertising and also promotes them via our SL Television. We have a large, complete broadcasting studio, so we can produce various kinds of machinima, such as SL news reporting, talk shows, concerts, and island presentations. If you have your own machinima, you can also promote it via SL Television. See SLnews - www.slnews.tv See TV Station - http://slurl.com/secondlife/TVstation/172/84/22

new animation movie out

Bee Movie - Trailer

Posted Jul 18, 2007

Barry B. Benson is a graduate bee fresh out of college who is disillusioned at his lone career choice: making honey. On a rare trip outside the hive, Barry's life is saved by Vanessa, a florist in New York City.

Wikipedia:10 things you did not know about images on Wikipedia


This image is free, so I can use it here and you can use it too!
10 things you did not know about images on Wikipedia is a list of insights about Wikipedia specifically targeted at people who have limited foreknowledge about images on the project, such as new editors and new readers. These explanations should not surprise experienced editors, but hopefully will help the rest of the world to shape an informed opinion of our work and understand why sometimes it seems we do not have an "easy to get image" of something.
1. We want images.
However, we primarily want freely licensed images which are compatible with our policies and goal of creating a free resource for everyone. If a picture is worth a thousand words, a free one is giving a thousand words to everyone who wants to use it and will ever see it; a non-free one only gives visitors to this single website a thousand words.
We depend on people like you to author and contribute images for Wikipedia, and the rest of the world, to use, as long as you are willing to release the images under a free license.
When we say freely licensed we're talking about the freedom of use the public has with images, not the price.
More information: Wikipedia:Requested pictures, The Definition of Free Content
* We want usable images.
Please do not upload images that should not or can not be used in an article. While we permit a limited amount of images for users to use on their user page, we do not need a 9th image of your Jack Russell Terrier on that article. The Wikimedia Foundation is not a free webspace for your images. Please use a website designed for this.
2. We have 750,300 images.
We reached 1 million images on all Wikipedia projects in July of 2006. There are over 1.75 million images on Commons. On the English Wikipedia we have over 750,000 images. You can help Wikipedia by going through images and looking for problems such as missing source and licenses. You can nominate images for deletion via IFD. You can cleanup images too; see Wikipedia:Images for cleanup.
More information: Special:Statistics, Commons:Special:Statistics, Special:Unusedimages.
3. All images uploaded must have a source and license.
Failure to provide a source (who made it) and license (how it can be used) will result in the image being deleted, possibly as soon as 48 hours after upload. You must provide this information for all images you upload, with no exceptions, or the images will be deleted. Blatant and completely unjustifiable violations of copyright law and our image policies can be (and are) deleted almost immediately.
We have a long term mission to create and promote content which is free of the typical encumbrances of copyright law. This mission requires us to take copyright very seriously. Unlike (most) other websites that allow user submission and generation of content, we aggressively remove all copyright infringements as soon as we can find them and block people who willfully ignore this after being warned.
* Because free content is a fundamental part of our mission, our policy on image licensing is more restrictive than required by law.
More information: Wikipedia:Non-free content
4. Use non-free images only when nothing else is possible.
Do not go to the nearest website and grab an image of a person/place/building. It is extremely likely that image is both copyrighted and fails our Non-free content policy, which states that a non-free image may be used only when it cannot be replaced. For example, there's no way whatsoever that a logo of a political party or a screenshot of a video game can be replaced by a free image, but a photo of someone or a certain location can almost always be replaced, even if doing so may be very difficult. Search for free images, especially for living persons, existing buildings and places. Don't upload an image just because the article doesn't have one right now; we can (and will) wait for a free image to be created or released. If you are going to upload a non-free image, see Wikipedia:Non-free content criteria first. Wikipedia:Fair use rationale guideline will be helpful as well.
More information: Wikipedia:Non-free content criteria #1
5. Non-commercial, educational-only and non-derivative images are NOT "free" images.
All such restrictions unacceptably limit how other people may use the image off of Wikipedia, which completely contradicts the entire point of "free images" and "Free content" (See above). These licenses are not valid at all and such images must be justified as "fair use" or they will be immediately deleted.
6. No one has a perfect understanding of the copyright law.
Even if you are a licensed attorney who practices in this area, US copyright law (which applies to Wikipedia) is complex. While an understanding of how it applies to Wikipedia may be achievable, there is considerable gray area, and deciding the status of one image in a complex situation can be very difficult, if not impossible, at times.
7. We have an image use policy.
Once an image is uploaded and correctly sourced and licensed, it may then be used in articles. See Wikipedia:Image use policy, which describes the accepted ways of displaying, formatting, etc., images. If you use images in an article, you should be familiar with it. Example: Did you know ... that the maximum width at which an image can be displayed in an article is 550 pixels?
8. Ideally, there would not be any images stored on Wikipedia; they would all be on Wikimedia Commons.
Because we want free content, all images uploaded would ideally be free for everyone and therefore would be acceptable on our sister project, Wikimedia Commons. Images submitted to Commons are automatically available here... and on hundreds of other Wikis run by the Wikimedia Foundation. If you're looking for an image for an article, be sure to search using the commons Mayflower search.
More information: Wikipedia:Free image resources
9. Uploading the same image 8 times is not needed.
You can edit the image page! Just like every other page on Wikipedia, the image description page can be edited by anyone. Just click "edit this page" while looking at the image page. Forgot to license or give the source for the image when you uploaded it? Do not re-upload the image: edit the image description page and add the license!
Also, the wiki software can control the size of the images, so you do not need to re-upload a smaller version of the same image. See Wikipedia:Extended image syntax. There, you can learn how to use frames, control the position in the article and about captions! For more on captions, see Wikipedia:Captions.
10. You can use (free) images on Wikipedia yourself, anywhere you like.
You can use images that are on Commons and free images on Wikipedia provided you comply with the individual image license terms, not (necessarily) the GFDL. While all article text is licensed under the GFDL, free images have several licenses to choose from. See Wikipedia:Image copyright tags/Free licenses for the many possibilities. You can use them on any page on Wikipedia. You can even use them OFF Wikipedia, such as on a website, printed material, anywhere! All "free" image licenses allow these uses.
* You cannot use non-free images anywhere except in relevant encyclopedia articles.
Non-free images can only be used in the article namespace where they have a rationale for existing. You cannot use them anywhere else, such as on policy pages, discussion pages, templates, or user communication pages. If you need to discuss them, link to them by putting a colon (:) between the "[[" and "Image:" like this: [[:Image:Imagenamegoeshere.ext]]

More information: Wikipedia:Non-free content criteria #9
Other media
All the above applies to audio and video too.
We allow other forms of media, such as audio and even video. The same rules apply for these media as they do for images.

Wikipedia:10 things you did not know about Wikipedia

10 things you did not know about Wikipedia is a list of insights about Wikipedia specifically targeted at people who have limited prior experience with the project, such as journalists, new editors, and new readers. These explanations should not surprise experienced editors, but hopefully will help the rest of the world to shape an informed opinion of our work.
More information: http://en.wikipedia.org/wiki/Wikipedia:About
1. We are not for sale.
If you are waiting for Wikipedia to be bought by your friendly neighborhood Internet giant, do not hold your breath. Wikipedia is a non-commercial website run by the Wikimedia Foundation, a 501(c)(3) non-profit organization based in St. Petersburg, Florida. We are supported by donations and grants, and our mission is to eventually bring free knowledge to everyone.
More information: http://wikimediafoundation.org/
2. Our work can be used by everyone, with a few conditions.
Wikipedia has taken a cue from the free software community (which includes projects like GNU/Linux and Mozilla Firefox) and done away with traditional copyright restrictions on our content. Instead, we have adopted what is known as a "free content license" (specifically, the GFDL): all text and composition created by our users are and will always remain free for anyone to copy, modify, and redistribute. We only insist that you credit the contributors, and that you do not impose new restrictions on the work or any improvements you make to it. Many of the images, videos, and other media on the site are also under free licenses, or in the public domain. Just check a file's description page to see its licensing terms.
More information: http://en.wikipedia.org/wiki/Wikipedia:Copyrights
3. We speak Banyumasan…
…and about 250 other languages. Granted, only about 60 of those Wikipedia language editions currently have more than 10,000 articles — but that is not because we are not trying. Articles in each language are generally started and develop differently from their equivalents in other languages, although some are direct translations, which are always performed by volunteer translators, and never by machines. The Wikimedia Foundation is supported by a growing network of independent chapter organizations, already in seven countries, which help us to raise awareness on the local level. In many countries, including the United States, Wikipedia is among the ten most popular websites.
More information: http://meta.wikimedia.org/wiki/List_of_Wikipedias and http://www.alexa.com/data/details/traffic_details?q=&url=wikipedia.org
4. You cannot actually change anything in Wikipedia...
…you can only add to it. Wikipedia is a database with an eternal memory. An article you read today is just the current draft; every time it is changed, we keep both the new version and a copy of the old version. This allows us to compare different versions, or restore older ones as needed. As a reader, you can even cite the specific copy of an article you are looking at. Just link to the article using the "Permanent link" at the bottom of the left menu, and your link will point to a page whose contents will never change. (However, if an article is deleted, you cannot view a permanent link to it unless you are an administrator.)
More information: http://en.wikipedia.org/wiki/Wiki
5. We care deeply about the quality of our work.
Wikipedia has a complex set of policies and quality control processes. Editors can patrol changes as they happen, monitor specific topics they know about, follow a user's track of contributions, tag articles with problems for other editors to work on, report vandals, discuss the merits of each article with other users, and a lot of other things. Our best articles are awarded "featured article" status, and problem pages are nominated for deletion. "WikiProjects" focus on improvements to particular topic areas. Really good articles may go into other media and be distributed to schools through Wikipedia:1.0. We care about getting things right, and we never stop thinking about new ways to do so.
More information: http://en.wikipedia.org/wiki/Wikipedia:Community_Portal, http://en.wikipedia.org/wiki/Wikipedia:Why_Wikipedia_is_so_great, http://en.wikipedia.org/wiki/Wikipedia:Attribution, http://en.wikipedia.org/wiki/Wikipedia:Verifiability
6. We do not expect you to trust us.
It is in the nature of an ever-changing work like Wikipedia that, while some articles are of the highest quality of scholarship, others are admittedly complete rubbish. We are fully aware of this. We work hard to keep the ratio of the greatest to the worst as high as possible, of course, and to find helpful ways to tell you what state an article is currently in. Even at its best, Wikipedia is an encyclopedia, not a primary source, with all the limitations it entails. We ask you not to condemn Wikipedia, but to use it with an informed understanding of what it represents. Also, as some articles may contain errors, please do not use Wikipedia to make important decisions.
More information: http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
7. We are not alone.
Wikipedia is part of a growing movement for free knowledge that is beginning to permeate science and education. The Wikimedia Foundation directly operates eight sister projects to the encyclopedia: Wiktionary (a dictionary and thesaurus), Wikisource (a library of source documents), Wikimedia Commons (a media repository of more than one million images, videos, and sound files), Wikibooks (a collection of textbooks and manuals), Wikiversity (an interactive learning resource), Wikinews (an experiment in citizen journalism), Wikiquote (a collection of quotations), and Wikispecies (a directory of all forms of life). Like Wikipedia itself, all these projects are freely licensed and open to contributions.
More information: http://wikimediafoundation.org/wiki/Our_projects
8. We are only collectors.
Articles in Wikipedia are not signed, and contributors are unpaid volunteers. Whether you claim to be a tenured professor, use your real name or prefer to remain without an identity, your edits and arguments will be judged on their merits. We require that sources be cited for all significant claims, and we do not permit editors to publicize their personal conclusions when writing articles. Editors must follow a neutral point of view; they must only collect relevant opinions which can be traced to reliable sources.
More information: http://en.wikipedia.org/wiki/Wikipedia:Five_pillars, http://en.wikipedia.org/wiki/Wikipedia:Verifiability
9. We are not a dictatorship nor any other political system.
The Wikimedia Foundation is controlled by its Board of Trustees, the majority of whom the Bylaws require to be chosen from its community. The Board and Wikimedia Foundation staff do not take a role in editorial issues, and projects are self-governing and consensus-driven. Wikipedia founder Jimmy Wales occasionally acts as a final arbiter on the English Wikipedia, but his influence is based on respect, not power; it takes effect only where the community does not challenge it. Wikipedia is transparent and self-critical; controversies are debated openly and even documented within Wikipedia itself when they cross a threshold of significance.
More information: http://en.wikipedia.org/wiki/Criticism_of_Wikipedia, http://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_not
10. We are in it for the long haul.
We want Wikipedia to be around at least a hundred years from now, if it does not turn into something even more significant. Everything about Wikipedia is engineered towards that end: our content licensing, our organization and governance, our international focus, our fundraising strategy, our use of open source software, and our never-ending effort to achieve our vision. We want you to imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment — and we need your help.
More information: http://wikimediafoundation.org/

Monday, August 27, 2007

COMDEX Las Vegas 2003 to Examine Latest In IT Security; Sessions to Focus on Security Strategies for Today's Heterogeneous Business Environments

Business Editors/High-Tech Writers

COMDEX Las Vegas 2003

SAN FRANCISCO--(BUSINESS WIRE)--Oct. 31, 2003

MediaLive International, Inc., producer of the world's best-known events, related media and marketing services for technology buyers and sellers, announced today that security will be a key focus of COMDEX Las Vegas 2003, November 16-20, 2003. Through a series of discussion panels, educational tutorials and hands-on demonstrations, COMDEX will highlight trends in current security practices for organizations of all sizes, the costs and benefits of each, as well as what the future holds, including wireless security, secure Web services, spam reduction and biometrics.

"This year's attendees will agree that COMDEX is the best place for the IT buying community to immerse themselves in mission-critical topics such as ensuring the security of their organization's intellectual property, Web domains, email systems, software systems and wireless infrastructure," said Eric Faurot, vice president and general manager of COMDEX. "With emerging technologies and standards such as Web services, Wi-Fi and Linux, it is critical that technology buyers understand security in the context of their overall architecture -- only COMDEX provides the broad-based, neutral platform required for that scale of interaction."

Security leaders and emerging companies that will be involved in COMDEX Las Vegas 2003 include Cerberian, Computer Associates, Counterpane Internet Security, Cryptography Research, McAfee, Microsoft, Nokia, SonicWALL, Symantec Corp. and more. The conference will showcase solutions for the key security issues businesses face, including securing wireless networks, fighting spam, securely deploying Web services and open-source software, authentication, virus protection and biometrics.

COMDEX Security Education Sessions

The COMDEX security conference will be anchored by a keynote address from Symantec Corp. CEO John W. Thompson, at 9 AM, Wednesday, November 19, in room C5 of the Las Vegas Convention Center.

In addition, the event will feature a Security Power Panel, "How Much Security is Enough?" at 11 AM on Monday, November 17, in room N109. Moderated by Tom Standage, technology correspondent for The Economist, the panel will focus on what businesses need for effective, efficient security. Panel participants include Christian Byrnes, vice president and service director for the META Group; Ben Golub, Verisign's senior vice president of Security, Payments and Managed Security; Dan MacDonald, vice president, Nokia; Ron Moritz, chief security strategist for Computer Associates; and Bruce Schneier, CTO and founder of Counterpane Internet Security.

Continuous security-related presentations will take place at the COMDEX Security Innovation Center, where experts from SonicWALL, McAfee, Cerberian and the ASCII group will share their knowledge and oversee attendees participating in hands-on, business security challenges.

Session topics at COMDEX's Security Conference include:

-- Deploying Wireless LANs Securely

-- Intrusion Prevention Systems

-- Security Policy

-- Where Hardware Security Meets Software Security - Weak Points and Real Attacks

-- Identity Management - The Future of Security

-- Making Sense out of Web Services Security

-- Dealing With Spam

-- Antivirus Measures: Critical Business Process or Costly Overhead?

-- Web Threats and Countermeasures

-- Securing Microsoft IIS

-- Choosing the Authentication Method that's Right for Your Organization

-- The Promise of Biometric Technologies

-- Deploying Biometrics in Your Workplace

COMDEX also is offering five specific, full-day security tutorials, November 16-17, each concentrating on a different issue, such as defeating junk mail, usage of the Secure Sockets Layer, intrusion detection, and a live hacking demonstration.

COMDEX Las Vegas 2003 focuses on IT in the B2B marketplace and covers seven core technology themes: Linux and Open Source, Wireless and Mobility, the Digital Enterprise, Web Services, Windows Platform, On-Demand Computing and Security. Together, these themes represent the fastest growing areas of technology advancement that will drive the majority of market innovation in support of user needs in 2003 and beyond. For further information about the extensive security tracks COMDEX Las Vegas offers this year, visit www.comdex.com.

How to Register for COMDEX

Online registration is available immediately at www.comdex.com/lasvegas2003/register, or by calling toll free at 888-508-7510 or 508-743-0186 (outside the U.S.). Call center hours are Monday-Friday 7 AM - 5 PM, Pacific. Closed Saturday and Sunday.

About COMDEX

Part of the MediaLive International, Inc. family of global brands, COMDEX hosts educational forums, events and conferences that focus on the technology areas most critical to today's IT buyer. COMDEX fosters ongoing collaboration, communication and commerce for the $879 billion IT market by connecting IT vendors with decision makers in Global 2000 companies. Upcoming regional events include COMDEX Sweden 2004, January 23-25, in Goteborg; COMDEX Saudi Arabia 2004, March 14-17, in Jeddah; and COMDEX Canada 2004, March 24-26, in Toronto.

About MediaLive International, Inc.

MediaLive International is producer of the world's best-known events, related media and marketing services for technology buyers and sellers. MediaLive International's products and services encompass the IT industry's largest exhibitions, including COMDEX and NetWorld+Interop, such highly focused educational programs as BioSecurity and Next Generation Networks, custom seminars including JavaOne, respected publications including Business Communications Review, and specialized vendor marketing programs. Created in 2003 from the assets of Key3Media, MediaLive International is a privately held company headquartered in San Francisco, with offices throughout the world. For more information about MediaLive International, visit www.medialiveinternational.com.

MediaLive International, COMDEX, NetWorld, NetWorld+Interop, Next Generation Networks, Business Communications Review, BioSecurity and associated design marks and logos are trademarks or service marks owned or used under license by MediaLive International, Inc., and may or may not be registered in the United States and other countries. Other names mentioned may be trademarks or service marks of their respective owners.

COPYRIGHT 2003 Business Wire
COPYRIGHT 2003 Gale Group

Thursday, August 23, 2007

Skype recounts tale of 'perfect storm' outage

It was a dark and stormy upgrade, and it won't happen again, they say

Peter Sayer

August 21, 2007 (IDG News Service) -- The situation that prevented millions of people from accessing Skype Ltd.'s Internet telephony service late last week was a "perfect storm" and should not reoccur, the company said Tuesday.

The company initially attributed the problem, which began on Aug. 16, to the near-simultaneous rebooting of millions of computers, as Skype users running the Windows operating system attempted to reconnect to the service after downloading a series of routine software patches from Microsoft Corp.'s Windows Update service.

Skype's service relies on some of its users' computers to act as "supernodes," routing traffic for other, less well-connected, users. But as Skype customers tried to reconnect, many of those supernodes were themselves in the process of rebooting. The remaining supernodes were soon overwhelmed because a bug in the company's software did not efficiently allocate the network resources available.

Users were skeptical of this explanation. Microsoft regularly issues patches that may cause Windows computers to reboot, and they haven't caused problems for Skype before. Microsoft releases software updates on the second Tuesday of each month, a day known to systems administrators as "Patch Tuesday."

Skype spokesman Villu Arak offered a more detailed explanation of Skype's outage on Tuesday: Last week's problems were the result of a "perfect storm" of exceptionally high traffic through the service at the same time as the Windows Update process led to a shortage of supernodes in the service's peer-to-peer network.

The company did not offer an explanation for the high traffic, but accepted full responsibility for the software problem.

"Skype and Microsoft engineers went through the list of patches that had been pushed out," Arak wrote. "We ruled each one out as a possible cause for Skype's problems. We also walked through the standard Windows Update process to understand it better and to ensure that nothing in the process had changed from the past (and nothing had)."

The catastrophic effect on Skype's service was entirely Skype's fault -- a result of its software being unable to deal with simultaneous high load and supernode rebooting, according to Arak.

On Aug. 17, the day after the problems began, Skype released a new version of its software client for Windows to correct the problem. That update should behave better the next time high traffic coincides with a scarcity of supernodes, he said.

Skype had updated versions of its software client for Windows, Mac and Linux since July's patch Tuesday and before last week's outage, but the changes made in those updates were not responsible for the problem, according to company spokeswoman Imogen Bailey.

Reprinted with permission from

idg.net
Story copyright 2006 International Data Group. All rights reserved.

Zink Imaging LLC's Zink: Inkless Photo Printing


Mark Hall

August 20, 2007 (Computerworld) -- Zink Imaging LLC

Johannes Gutenberg might have judged the folks at Zink Imaging LLC as heretics. After all, they have removed ink from the publishing process, eliminating what has been a fundamental element of printing since the first Bible rolled off the press in 1456. Instead, the scientists at the Waltham, Mass., firm focused their genius on the other key part of the printing paradigm — paper.

“The magic is in the paper,” says Stephen Herchen, chief technology officer at Zink, which is short for “zero ink.”

Herchen says Zink started as a project inside Polaroid Corp. in the 1990s before the storied camera company spun out Zink as a fully independent entity in 2005. The technology invented at Polaroid and perfected by Zink uses millions of colorless dye crystals layered under polymer-coated paper, making the prints durable enough for long-lasting photos. When the crystals are heated at different temperatures at specific intervals, they melt onto the paper in the traditional cyan, magenta, yellow and black used by ink-jet, laser and other printing devices. At the Demo Conference in Palm Desert, Calif., earlier this year, Herchen showed this reporter how it worked by lighting a match and holding it under the sample blank white paper to get the crystals to melt into a rainbow of colors.

Wow Factor
Inkless Photo Printing

Dye crystals embedded in special paper become colored when a printhead heats and activates them.

According to Zink’s CTO, like a lot of scientific teams involved in breakthrough projects, the 50 chemists and physicists involved at Polaroid and then Zink went through a long process of trial and error to create the right combination of molecules that could be controlled on the paper. And the final result had the look and feel of a regular photograph, he says.

Another upside is that consumers will no longer have to dispose of environmentally iffy ink cartridges, Herchen says. And pricing will be less than $2 per 10-sheet pack, the company says.

IDC analyst Ron Glaz says the technology is certainly innovative, but he says not to expect it to replace a desktop or network printer anytime soon. “It’s a niche product for a niche market,” he says.

Scott Wicker, Zink’s chief marketing officer, doesn’t disagree. What sets Zink apart is that it “enables printing where it doesn’t currently exist,” he says, explaining that without the need for ink cartridges or ribbons, printers can now be built into small, mobile devices such as digital cameras.

Wicker says the company will control the manufacture of the Zink printer paper, but partners will build and distribute printer products, an approach Glaz says could improve the likelihood of Zink’s success. Wicker adds there are no restrictions on the paper size that Zink can produce, although he says that the first paper to ship with products late this year will be 2 by 3 inches.

Cleversafe's Dispersed Storage

Robert L. Scheier

August 20, 2007 (Computerworld) -- Cleversafe Inc.


After selling his music services company MusicNow to Circuit City Stores Inc. in 2004, Chris Gladwin took a break to organize his own music and photos. It was then that Gladwin realized the usual method — storing multiple copies of data — was complicated and expensive.

A longtime inventor with an interest in cryptography, Gladwin developed algorithms to securely split and save data among multiple nodes and reassemble it when needed. That November, he founded Cleversafe Inc. to commercialize his work. Now a 29-person company, with Gladwin serving as president and CEO, Cleversafe is funded by more than $5 million from Gladwin and other early employees as well as “angel” and venture investors.

Wow Factor
Slice-and-Dice Storage

Unique algorithms slice, scramble, compress and disperse data over the Internet to servers on a grid.


Cleversafe’s Dispersed Storage software splits data into 11 “slices” of bytes, each of which is encrypted and stored on a different server across the Internet or within a single data center. This approach provides security, says Gladwin, because no one slice contains enough information to reconstitute any usable data. The self-healing grid provides up to 99.9999999999% reliability because data can be reconstituted using slices from any six nodes. Scalability is ensured, Gladwin says, because adding more storage requires merely adding servers to the grid or storage to the existing servers.
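
Cleversafe's exact algorithms are proprietary, but the "any six of eleven" property is characteristic of threshold schemes such as Shamir-style secret splitting or Reed-Solomon erasure coding. The Python sketch below, written for this article as an assumption about the general technique rather than as Cleversafe's code, splits each byte into 11 shares over a small prime field so that any 6 recover it:

    # A minimal k-of-n threshold sketch (not Cleversafe's algorithm).
    import secrets

    P = 257  # small prime field keeps the demo simple; a share can be 256,
             # so real systems use GF(256) to keep shares byte-sized

    def split_byte(b, k=6, n=11):
        """Split one byte into n shares; any k of them recover it."""
        # hide b as the constant term of a random degree-(k-1) polynomial
        coeffs = [b] + [secrets.randbelow(P) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):  # evaluate by Horner's rule
                y = (y * x + c) % P
            shares.append((x, y))
        return shares

    def recover_byte(shares):
        """Lagrange-interpolate the shared polynomial at x = 0."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    data = b"secret"
    slices = [split_byte(b) for b in data]
    any_six = [s[3:9] for s in slices]  # any six of the eleven shares will do
    print(bytes(recover_byte(s) for s in any_six))  # b'secret'

A production system would also add the scrambling and compression Gladwin describes, and no five shares together reveal anything about the hidden byte.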

Among the biggest cost savers, says Gladwin, is the reduction in total storage needs achieved by eliminating separate copies for backups, archives and disaster recovery. Compared with ratios of 5-to-1 or 6-to-1 of “extra” vs. original data in copy-based storage environments, Cleversafe requires ratios of 1.3-to-1 or less. While Gladwin has no specifics on how his software will be priced, he says customers should see savings “at least proportional” to the reduction in total stored data.
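
The arithmetic is easy to check. Using the 6-of-11 parameters mentioned above (an assumption; the configuration behind the quoted 1.3-to-1 figure isn't specified), total storage is n/k times the original data:

    # Overhead of a k-of-n dispersal vs. keeping whole copies.
    k, n = 6, 11                 # assumed slice parameters
    extra = n / k - 1            # extra bytes stored per original byte
    print(f"{extra:.2f}-to-1")   # 0.83-to-1, vs. 5-to-1 for five full copies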

Originally, the team thought in terms of gigabytes of data to be stored. “Now,” says Gladwin, “we think in terabytes and even occasionally petabytes.” He says the first target will be secondary storage, where Dispersed Storage could replace tape and optical drives for backup and archiving.

This approach could “completely change the way storage administrators conduct their daily operations,” says John Webster, an analyst at Illuminata Inc.

Scheier is a freelance writer in Boylston, Mass. Contact him at rscheier@charter.net.

Ghost Inc.'s Ghost: The Everywhere OS

Gary Anthes
August 20, 2007 (Computerworld) -- Ghost Inc.'s Ghost

“Ghost is founded on the passionate belief that the Windows and Mac model of your operating system — with your precious applications and data all walled inside one physical computer — is obsolete,” says Ghost’s creator, Zvi Schreiber.

The Global Hosted Operating System, or Ghost, is the logical next step in a trend to move applications and files from client computers to the Internet, says Schreiber. It is a Web-hosted image of your desktop or laptop — a virtual computer that can be accessed by any client device via a Web browser.

Ghost doesn’t require software upgrades or patches for user machines, and it’s always backed up. But its key selling point is the mobility and device-independence it offers users, says Schreiber, CEO of start-up Ghost Inc. in New York. “Young people do a lot of computing at school, and business people don’t want to carry their laptops everywhere,” he says. “People want to get their computing environment from anywhere.”

Wow Factor
The Everywhere OS

Free PC environment can be accessed from any browser, with single online file system, single log-in and file sharing.


Offered as a set of application services inside a virtual computer, Ghost is free for users. Schreiber says revenue will come from vendors who remit fees to the company when they sell products or services to Ghost users.

Ghost is in an alpha, “open to the public” release, Schreiber says, and it’s available at http://g.ho.st/home. “We don’t feel we are offering a complete service by any means, mainly in terms of the number of applications that are nicely integrated into Ghost,” he says. “But that’s changing pretty rapidly. By the third quarter, it will be a beta [release]. Not perfect, but really usable.”

Ghost users can’t use client-based applications like Microsoft Word or Excel, but they can use Web-based alternatives such as Google Docs & Spreadsheets. Schreiber says over the next year or so, he will seek partners to create Web-hosted versions of all popular desktop programs and help users migrate their data to them.

Rick Boyd, a Catholic priest and self-described “computer nerd” in Park Rapids, Minn., says he uses Ghost to host and manage his bookmarks, files and documents. “I use it every day, and I find it very convenient,” he says. “I believe that Web-based applications are the future.”

Boyd says he likes being able to access his bookmarks from any computer, no matter where he might be. “But it’s not just a bookmark manager,” he says, “it’s a virtual desktop, and that’s very innovative.”

Ghost is built from OpenLaszlo, an open-source platform for the development and delivery of Web applications that have the appearance and functions of traditional desktop applications. Ghost is hosted by Amazon Web Services.

Despite the use of OpenLaszlo and Amazon, Ghost developers still had to write a fair amount of software and do considerable systems integration work, Schreiber says.

“We had to think about the architecture very carefully to make it scalable, robust and secure,” he says. Scalability was enhanced by pushing some of the processing and memory use from the server to clients, Schreiber adds.

10 Cool Cutting-Edge Technologies on the Horizon Now

Eleksen Group's wearable gadgetry kicks off this year's Horizon Award winners

Robert Mitchell

August 20, 2007 (Computerworld) -- Eleksen Group PLC's Sideshow Wearable Display Module

It started as an idea for making more life-like puppets for the British TV show Spitting Image. Four years later, Eleksen Group PLC is hoping that its interactive textile technology will form the foundation for a new generation of wash-and-wear computer control and display devices.

The centerpiece of the technology is ElekTex, a fabric-based, pressure-sensitive control interface that can be integrated into jackets, bags and other textile products. The technology is already used as a remote control for iPods and cell phones in backpacks and coats. At this year’s Consumer Electronics Show in Las Vegas, Eleksen presented its latest design concept, which integrates ElekTex fabric controls with an LCD display that can interact with Windows Vista’s Sideshow feature. The latter exports information from a Vista laptop to a secondary display. Mini-applications, or “gadgets,” written for Sideshow can then wirelessly deliver e-mail, alerts or other updates to the remote screen even if the laptop remains in its case and turned off. Fabric-based controls and embedded control electronics interact with the display. Iver Heath, England-based Eleksen is also planning support for secondary displays on the Macintosh.

Wow Factor
Wearable Gadgetry

Schedules and recent e-mails can be viewed without powering up a laptop, through a fabric-embedded module.


Initial implementations of ElekTex will likely be integrated into laptop bags with embedded button controls and small color LCD displays, says John Collins, vice president of marketing and business development at Eleksen. However, Collins envisions an eventual move to flexible displays based on color organic LED technology. That would allow the control and display surfaces to be embedded on any fabric surface, including a shirt. “Imagine receiving critical information from enterprise information systems on your sleeve,” says Vassilis Seferidis, vice president of product management.

ElekTex fabrics are constructed from woven layers of nylon and carbon-impregnated nylon that are not only bendable but also washable. Because of the nature of the material, it can be sewn, glued or even heat-welded into other fabrics. Mark Treger, national sales manager at Goodhope Bags Inc. in Chino, Calif., has embedded ElekTex sensors into backpacks to control iPods. “You can just sew through it. It just works,” he says. The one limitation is cost. Collins estimates that a laptop bag with the technology would cost about $200. But Treger says the cost of the ElekTex technology has already dropped by 50% in the past year. He sells a fabric keyboard for use with the BlackBerry that sold for $169 last year. Today, it’s priced at under $130, and by the holiday season, he says, retailers will be selling them for about $80.

The technology and the manufacturing process took years to perfect, says Collins — and that gives the company a leg up on any competition. “Their strength is understanding how to do the wiring and connections and create control surfaces with the right amount of tactile feedback,” says Leslie Fiering, an analyst at Gartner Inc.

“The knitted, woven materials allow us to get x, y and z coordinates,” says Collins. Currently, Eleksen is producing button and scroll controls. Next, it plans to support gestures across the control surface, simulating a mouse or fabric-based touch pad. “It’s a matrix arrangement, similar to what you’d find on touch-screen displays,” Collins says.
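
In code, scanning such a sensing matrix might look like the sketch below. The drive_row and read_column callbacks are hypothetical stand-ins for Eleksen's fabric-interface electronics, which the company has not published; the loop simply energizes one row at a time and reads a pressure value at each column crossing:

    # Hypothetical matrix scan of a pressure-sensitive fabric grid.
    ROWS, COLS = 8, 8
    THRESHOLD = 40  # assumed ADC noise floor, in counts

    def scan_matrix(drive_row, read_column):
        """Energize each row in turn and read every column crossing.

        drive_row(r) and read_column(c) are invented stand-ins for the real
        electronics; read_column returns a value that rises with pressure at
        the crossing. Returns (x, y, pressure) of the firmest press, or None.
        """
        best = None
        for r in range(ROWS):
            drive_row(r)
            for c in range(COLS):
                z = read_column(c)
                if z > THRESHOLD and (best is None or z > best[2]):
                    best = (c, r, z)
        return best

    # toy usage: a fake 8x8 pressure frame with one press at column 5, row 2
    frame = [[0] * COLS for _ in range(ROWS)]
    frame[2][5] = 180
    state = {"row": 0}
    print(scan_matrix(lambda r: state.update(row=r),
                      lambda c: frame[state["row"]][c]))  # (5, 2, 180)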

Seferidis expects viable bendable displays to be available in about two years. But he is working with vendors to make displays do more than just bend. “Our work will be to make them washable,” he says.

The Sideshow capability is “pretty cool,” says Fiering, but even more interesting will be what designers can dream up if the technology catches on. The most fascinating applications, she says, haven’t even been thought of yet.

Thursday, July 26, 2007

Computer learns vowels like a baby

A team of researchers has developed a computer program that can learn to decipher sounds the way a baby does.

The impetus behind the program was to better understand how people learn to talk, or more specifically, to see whether language is hard-wired in the brain.

Tests of the computer model back up a theory that babies learn to speak by sorting through different sounds until they understand the structure of a language, according to James McClelland, a psychology professor at Stanford University who wrote a paper on the subject that appeared in the Proceedings of the National Academy of Sciences. McClelland was quoted in an article from Reuters.

McClelland's team found that the computer could track vowel sounds just like a baby. "In the past, people have tried to argue it wasn't possible for any machine to learn these things," he said in the Reuters article.
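
McClelland's actual model is described in the PNAS paper; as a loose illustration of the underlying idea (discovering vowel categories from raw sound statistics, with no labels), here is a small online-clustering sketch in Python over synthetic formant frequencies:

    # Illustrative only -- not McClelland's model. An online clusterer
    # groups sounds by their first two formant frequencies (F1, F2).
    import random

    def online_kmeans(samples, k=3, lr=0.05, seed=0):
        """Nudge the nearest 'vowel prototype' toward each sound as it arrives."""
        rng = random.Random(seed)
        centers = [list(rng.choice(samples)) for _ in range(k)]
        for f1, f2 in samples:
            c = min(centers, key=lambda m: (m[0] - f1) ** 2 + (m[1] - f2) ** 2)
            c[0] += lr * (f1 - c[0])
            c[1] += lr * (f2 - c[1])
        return centers

    # synthetic (F1, F2) values loosely near /i/, /a/ and /u/ -- illustrative,
    # not measured data
    prototypes = [(300, 2300), (700, 1200), (320, 800)]
    rng = random.Random(1)
    data = [(f1 + rng.gauss(0, 30), f2 + rng.gauss(0, 60))
            for f1, f2 in prototypes for _ in range(200)]
    rng.shuffle(data)
    print(online_kmeans(data))  # centers near the vowel regions (init-dependent)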

Sunday, June 10, 2007

The amazing story of the birth of HCL

In 1976, during lunchtime at Delhi Cloth Mills (DCM), a group of six young engineers in the office canteen were discussing their work woes at DCM's calculator division.
Despite all of them having jobs that paid well, they were an unhappy lot -- they wanted to do more, riding on their own gumption. They decided to quit their jobs and start a venture of their own.
The man fuelling the ambitions of his five colleagues at that canteen table was a 30-year-old engineer from Tamil Nadu, Shiv Nadar. And this is how the story of Hindustan Computers Limited (HCL) began.
Nadar and his five colleagues quit DCM in the summer of 1976. They decided to set up a company that would make personal computers. They had gathered enough technical expertise at DCM's calculator division but, as with all start-ups, funding was the problem.
However, Nadar's passion for his new dream company and the support of his enthusiastic colleagues soon made the task very easy.
Founder, chairman and CEO of HCL Technologies, Shiv Nadar told CNBC-TV18, "The first person I met was Arjun and he was also a management trainee like me. He was a couple of batches junior to me... We became very good friends and we are still very good friends. Then, the rest of them all worked for DCM and we all are of similar age, so we used to hang out together, crib together, have fun together, work together."
Nadar first had to gather cash to give wings to his idea of manufacturing computers. He floated a company called Microcomp Limited, through which he sold teledigital calculators. This venture threw up enough cash to let the founders give shape to their ultimate dream of manufacturing computers in India, at a time when computers were just sophisticated cousins of the good old calculator. Support also came from the Uttar Pradesh government.
Finally, the founders put together Rs 20 lakh (Rs 2 million) and HCL was born.
The year after HCL was floated, the Indian government reined in the ambitions of foreign companies in India. This sounded the death knell for companies like IBM and Coca-Cola, while bells began to ring for Indian ventures like HCL.
Managing Editor, The Smart Manager, Dr Gita Piramal says, "Few Indian businessmen were happy when George Fernandes became industry minister in 1977, when the Janata Party came to power. Foreign businessmen were even less happy that Coca-Cola and IBM left India. IBM's leaving, left a major vacuum and this was the vacuum in which Shiv Nadar spotted an opportunity. He stepped in and customers began to trickle in."
HCL started shipping its in-house microcomputers around the same time as its American counterpart Apple, and took only two more years to introduce its 16-bit processor.
By 1983, it had indigenously developed a relational database management system, a networking operating system and client-server architecture, almost at the same time as its global peers. The road to the top was now in sight, and HCL took it a step further by exploring foreign shores.
HCL's first brush with international business came in 1979, when it set up a venture in Singapore called Far East Computers. HCL was only three years old and its net worth was around Rs 3 crore (Rs 30 million). Shiv Nadar set an ambitious target for the venture and notched up sales of Rs 10 lakh (Rs 1 million) in the very first year.
Co-Founder, HCL Technologies, Ajai Chowdhry says, "We discovered that there was a good opportunity to enter Singapore with our own hardware we had manufactured in Singapore. But the strategy was very clearly around selling computerization rather than computers and so we actually took the whole idea of hardware, software solution and service and packaged it and presented it as computerization."
Even as it was basking in its success in Singapore, HCL planned a whole new area of expansion and tapped into a territory that was lying unexplored in the country: computer education. Sensing the increasing demand for computer training, HCL set up NIIT in 1981 to impart high-quality IT education in India.
Nadar explains, "We knew many people in IIT and the Indian Institute of Science. We formed an advisory panel and asked them, can you help us navigate this whole thing? They were very enthusiastic about it, and they were of course shaken up a little bit when they saw that we started advertising in Bombay -- selling education as a commercial project."
From calculators to IT education, the first five years of HCL were a combination of growth and expansion riddled with uncertainty. But the company was now gearing up to set a much bigger target for itself, and an announcement from the government would help it take off to those soaring heights.
In 1984, the Indian government announced a new policy that would change the fortunes of the entire computer industry. The government opened up the computer market and permitted import of technology. With new guidelines and regulations in place, HCL grabbed the opportunity to launch its own personal computer.
The demand for personal computers was slowly but surely mounting in the Indian market. Most banks were shifting to the UNIX platform. A few companies approached HCL for personal computers, so the founders flew all over the world to bring back PCs they could take apart, study, reproduce and indigenously upgrade. Their first innovative personal computer was ready in three weeks' time, and soon they launched their first range of computers, which they called the Busybee.
Chowdhry says, "In a lot of ways, it opened up the market because one thing was that, you no longer had to develop basic stuff in India - like operating systems but on the other hand it opened new opportunities like banking because as per government policy, all banking computers must be UNIX based. So, feverishly we set out creating a UNIX based computer and we bought the UNIX source code and created that product out of nothing."
In two years, HCL became one of the largest IT companies in India. The founders now went to different corners of the country to set up sales and marketing offices and it now needed the brightest minds to take it to the next level of competition.
Campus recruitment in management and technical institutes began in full swing and HCL grabbed some of the best talent by offering pay packages that outscored some of the best companies of the time -- Rs 2,000 per month to start with.
The adrenaline rush of the first half of the 1980s and the rapid expansion strategy soon caught up with HCL. A turning point came in 1989, when HCL, on the basis of a report by McKinsey and Company, decided to venture into the American computer hardware market.
HCL America was born but the project fell flat on its face. HCL had failed to follow a very crucial step necessary to enter the US market. A big disappointment was on its way.
Piramal says, "For every entrepreneur, the US will always remain the dream market. It's the biggest market in the world and Shiv Nadar obviously was drawn to it but he really didn't know what he was getting into. The computers he made didn't get environmental clearances. In fact, HCL probably turned into his biggest mistake but HCL and Shiv himself, he is a very strong person, he understood he was making a mistake, he saw that Infosys and Wipro are doing really well in software and he was not too proud to change gears and finally HCL did enter the software market."
It didn't take too long for HCL to brush off the disappointment in the US. Its first failure in the US was set aside in 1991 and HCL entered into a partnership with HP (Hewlett-Packard) to form HCL HP Limited. It opened new avenues for HCL and gave opportunities to firm up its revenues.
In three years, another new possibility came knocking at its door and in 1994, HCL looked beyond PCs and tied up with Nokia cellphones and Ericsson switches for distribution.
Chowdhry explains, "In 1991, India didn't have enough foreign exchange. We were in the hardware business and we didn't have enough funds. That's when a clear thought entered our minds: that we should globalize. And in the very early days, we actually created a joint venture with Hewlett-Packard."
By 1997, HCL was already a multi-dimensional company, and it spun off HCL Technologies Limited to mark its entry into the global software space. It made up its mind to focus on software development. Twenty years into his entrepreneurial journey, Shiv Nadar was now ready to take on global competition with all his might.
From the '70s to the '90s, the HCL story was one of steady rise, but amid its rapid expansion and continuous flow of achievements, Shiv Nadar didn't anticipate that he would be in for a rude shock -- and that it would come from someone very close.
In 1998, Arjun Malhotra, Shiv Nadar's comrade and friend, decided to leave the company to start his own TechSpan, headquartered in Sunnyvale, California. He was also one of the largest shareholders in HCL Infosystems at that time. For Shiv Nadar, it was time to think afresh.
Revenues from the hardware sector were shrinking, and Nadar now decided to redesign HCL. The company once again needed funds to grow, and this time around, Nadar decided to look at the capital market. An initial public offering (IPO) was made on the Indian stock exchange in 1999, and it was a stupendous success.
President, HCL Technologies, Vineet Nayar says, "The shareholders supported us. I think we started the IPO at Rs 580 and it went up to Rs 2,800 or something like that. So, it was a dream run. I think the shareholders bought the argument we were making, they liked the articulation of the strategy, they liked the management team and they liked the vision we were painting, and they supported the stock full time. That was a turning point for HCL."
Shiv Nadar now put aside his dream of becoming a global hardware major and ventured into software with an open mind and a clean slate. Technology was opening up vistas of opportunity in the software sector, and HCL now wanted to build new businesses.
Global business became a priority, so HCL started a BPO operation in Ireland in 2001. Its partner in this ambitious venture was British Telecom.
The years that followed saw HCL in expansion mode. In 2005 alone, HCL signed a software development agreement with Boeing for its 787 Dreamliner programme. Next came a venture with NEC of Japan.
It also bought out its joint ventures with Deutsche Bank and British Telecom's Apollo Contact Center. In the same year, HCL Infosystems launched its sub-Rs 10,000 personal computer and joined hands with AMD and Microsoft to bridge the digital divide.
The successes of 2005 spilled over into 2006, and the company was now producing over 75,000 machines a month, with more joint ventures being added to its list. But in spite of this overwhelming success, Shiv Nadar would not rest. There was a nagging sense of dissatisfaction, of potential not yet fully exploited, that still drove Nadar and the company to achieve much more.
Thirty years after starting his company, Shiv Nadar really does not have much to complain about. Hindustan Computers Ltd today is an empire worth $3.5 billion, with a staff strength of 34,000.
But then dissatisfaction has been the quintessential factor that has made Shiv Nadar the visionary that he was and continues to be. Dissatisfaction once drove him to quit his job at DCM and it is that same quality even today, that is driving him to achieve much more when he can quite easily rest on his laurels.