Wednesday, October 31, 2007

RSA '07: New threats could hamper traditional antivirus tools

An emerging breed of sophisticated malware is raising doubts about the ability of traditional signature-based security software to fend off new viruses and worms, according to experts at this week's RSA Security Conference in San Francisco.
Signature-based technologies are now "crumbling under the pressure of the number of attacks from cybercriminals," said Art Coviello, president of RSA, the security division of EMC. This year alone, about 200,000 virus variants are expected to be released, he said. At the same time, antivirus companies are, on average, at least two months behind in tracking malware. And "static" intrusion-detection systems can intercept only about 70 percent of new threats.

"Today, static security products are just security table stakes," Coviello said. "Tomorrow, they'll be a complete waste of money. Static solutions are not enough for dynamic threats."

What's needed instead are multilayered defenses -- and a more information-centric security model, Coviello said. "[Antivirus products] may soon be a waste of money, not because viruses and worms will go away," but because behavior-blocking and "collective intelligence" technologies will be the best way to effectively combat viruses, he said.

Unlike the low-variant, high-volume threats of the past, next-generation malware is designed explicitly to beat signature-based defenses by coming in low-volume, high-variant waves, said Amir Lev, president of Commtouch Software, an Israeli vendor whose virus-detection engines are widely used in several third-party products.

Until last year, most significant e-mail threats aimed for wide distribution of the same malicious code, Lev said. The goal in writing such code was to infect as many systems as possible before antivirus vendors could propagate a signature. Once a signature became available, such viruses were relatively easy to block.

New server-side polymorphic threats like the recent Storm worm, however, contain a staggering number of distinct, low-volume and short-lived variants and are impossible to stop with a single signature, Lev said. Typically, such viruses are distributed in successive waves of attacks in which each variant tries to infect as many systems as possible and stops spreading before antivirus vendors have a chance to write a signature for it.
Storm had more than 40,000 distinct variants and was distributed in short, rapid-fire bursts of activity in an effort to overwhelm signature- and behavior-based antivirus engines, Lev said.
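
To make the arms race concrete, here is a toy sketch of why per-variant signatures lose against that strategy. The payload bytes and the morphing scheme below are invented for illustration; real Storm variants were repacked server-side far more elaborately, but the arithmetic against a signature database is the same.

    import hashlib

    base = b"open_smtp(); send_spam(); join_p2p();"      # stand-in for real payload code
    # Trivial server-side "morphing": append a different suffix to each copy.
    variants = [base + i.to_bytes(4, "little") for i in range(40000)]

    # The AV vendor ships a signature for the first variant it captures.
    signatures = {hashlib.md5(variants[0]).hexdigest()}

    sig_hits = sum(hashlib.md5(v).hexdigest() in signatures for v in variants)
    beh_hits = sum(b"join_p2p" in v for v in variants)   # a crude behavioral rule

    print(f"signature detections: {sig_hits} of {len(variants)}")    # 1 of 40000
    print(f"behavioral detections: {beh_hits} of {len(variants)}")   # 40000 of 40000

Each new variant obsoletes the hash-based signature, while a rule keyed to what the code does survives every wave; that gap is what Coviello's "behavior-blocking" remark points at.
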
Another example of malware built to frustrate defenders is WinTools, which has been around since 2004 and installs a toolbar, along with three separate components, on infected systems. Attempts to remove any part of the malware cause the other parts to simply replace the deleted files and restart them. The fragmented nature of such code makes it harder to write removal scripts and to know whether all malicious code has actually been cleaned off a computer.

New version of Storm virus infects blogs and other Web postings

A new version of the Storm e-mail virus is populating blogs and online bulletin boards with links directing people to a Web site that is propagating the worm, representing a new mode of attack for hackers seeking financial gain, according to a security vendor that became aware of the virus Monday night.
The Storm worm attacks in December and January used infected e-mails to hijack personal computers and add them to “bot-nets,” networks of infected computers used by hackers to distribute spam and viruses.
Within the past day, a variation of this virus was found to be using infected computers to place malicious links on various Web sites, according to Secure Computing, a messaging security vendor based in San Jose, Calif.

If your computer is infected, the virus can add malicious text to any message you post to a blog or bulletin board. The text says, “Have you seen this?” and is followed by a URL containing the phrases “freepostcards” and “funvideo.”

“The new thing about this virus is the way it propagates. It’s basically filling up Web pages all over the Internet with links to the malware,” says Dmitri Alperovitch, principal research scientist for Secure Computing.

A Google search on Tuesday afternoon located 71 sites containing the link, including message boards hosted by the Salt Lake Tribune and a site about Australian pythons and snakes.

Clicking on the link causes the virus to be downloaded to the user’s computer. “It turns you into a zombie. Your computer is now under the full control of the criminal that is in control of this bot-net,” Alperovitch says.
The virus is a rootkit that integrates fully into an operating system, so it scans traffic to and from your machine and could intercept your bank account information or other sensitive data. The bot-net can also be used to launch an attack against a Web site, effectively shutting the site down by flooding it with traffic from infected computers, Alperovitch states. Hackers sometimes launch these attacks so they can demand ransom money from Web site owners in exchange for stopping the attack, according to Alperovitch.
Some antivirus programs have trouble finding the virus, he says, but you can figure out if your computer is infected by posting to a blog or bulletin board and seeing if your message contains the malicious link.

Typically, though, a user will not realize he or she is infected, and people who read postings to blogs and bulletin boards may be fooled into thinking the link should be trusted.

“Because they’re not looking to destroy data on your machine, you may not realize until much later that anything is happening,” Alperovitch says.

Gathering 'Storm' Superworm Poses Grave Threat to PC Nets

The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: "230 dead as storm batters Europe." Those who opened the attachment became infected, their computers joining an ever-growing botnet.

Although it's most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It's also the most successful example we have of a new breed of worm, and I've seen estimates that between 1 million and 50 million computers have been infected worldwide.

Old-style worms -- Sasser, Slammer, Nimda -- were written by hackers looking for fame. They spread as quickly as possible (Slammer infected 75,000 computers in 10 minutes) and garnered a lot of notice in the process. The onslaught made it easier for security experts to detect the attack but required a quick response from antivirus companies, sysadmins and users hoping to contain it. Think of this type of worm as an infectious disease that shows immediate symptoms.

Worms like Storm are written by hackers looking for profit, and they're different. These worms spread more subtly, without making noise. Symptoms don't appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.

Storm represents the future of malware. Let's look at its behavior:

1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.

2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are command-and-control (C2) servers. The rest stand by to receive orders. By allowing only a small number of hosts to propagate the virus and act as C2 servers, Storm is resilient against attack. Even if those hosts shut down, the network remains largely intact, and other hosts can take over their duties. (A small simulation after this list illustrates this resilience.)

3. Storm doesn't cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. This makes it harder to detect, because users and network administrators won't notice any abnormal behavior most of the time.

4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. The most common way to disable a botnet is to shut down the centralized control point. Storm doesn't have a centralized control point, and thus can't be shut down that way.

This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn't show up as a spike. Communications are much harder to detect.

One standard method of tracking root C2 servers is to put an infected host through a memory debugger and figure out where its orders are coming from. This won't work with Storm: An infected host may only know about a small fraction of infected hosts -- 25-30 at a time -- and those hosts are an unknown number of hops away from the primary C2 servers.

And even if a C2 node is taken down, the system doesn't suffer. Like a hydra with many heads, Storm's C2 structure is distributed.

5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called "fast flux." So even if a compromised host is isolated and debugged, and a C2 server identified through the cloud, by that time it may no longer be active. (The second sketch after this list shows what fast flux looks like from a resolver's side.)

6. Storm's payload -- the code it uses to spread -- morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.

7. Storm's delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites -- anything to entice users to click on a phony link. Storm also started posting blog-comment spam, again trying to trick viewers into clicking infected links. While these sorts of things are pretty standard worm tactics, it does highlight how Storm is constantly shifting at all levels.

8. The Storm e-mail also changes all the time, leveraging social engineering techniques. There are always new subject lines and new enticing text: "A killer at 11, he's free at 21 and ...," "football tracking program" on NFL opening weekend, and major storm and hurricane warnings. Storm's programmers are very good at preying on human nature.

9. Last month, Storm began attacking anti-spam sites focused on identifying it -- spamhaus.org, 419eater and so on -- and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy's reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
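
Two toy sketches, both with invented parameters, make points 2, 4 and 5 concrete. First, the resilience claim: give each of a few thousand simulated hosts a couple dozen random peers, knock out nearly a third of them, and the survivors remain almost entirely connected. This models only the topology argument, not Storm's actual protocol.

    import random
    from collections import deque

    random.seed(1)
    N, DEGREE = 2000, 25                    # hosts; peers each host knows about
    peers = {h: set(random.sample(range(N), DEGREE)) - {h} for h in range(N)}
    for h in range(N):                      # make the links bidirectional
        for p in peers[h]:
            peers[p].add(h)

    def reachable_from(start, removed):
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in peers[queue.popleft()]:
                if nxt not in seen and nxt not in removed:
                    seen.add(nxt)
                    queue.append(nxt)
        return len(seen)

    removed = set(random.sample(range(N), 600))      # take down 30% of the botnet
    start = next(h for h in range(N) if h not in removed)
    print(f"{reachable_from(start, removed)} of {N - len(removed)} surviving hosts still connected")

Second, fast flux as a resolver sees it: query the same name repeatedly and watch the A records rotate. The domain below is a placeholder; a real fast-flux name answers with a different, short-TTL set of compromised-host addresses on nearly every lookup.

    import socket
    import time

    def a_records(name):
        try:
            return set(socket.gethostbyname_ex(name)[2])   # (host, aliases, IPs)
        except socket.gaierror:
            return set()

    seen = set()
    for i in range(5):
        fresh = a_records("fastflux.example.com") - seen   # placeholder domain
        if fresh:
            print(f"query {i}: new A records {sorted(fresh)}")
        seen |= fresh
        time.sleep(2)                                      # fast-flux TTLs are tiny

    print(f"{len(seen)} distinct addresses observed")
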

Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can't imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn't work in any case: Storm's creators could easily design another worm -- and we know that users can't keep themselves from clicking on enticing attachments and links.

Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest. Creating a counterworm would make a great piece of fiction, but it's a really bad idea in real life. We simply don't know how to stop Storm, except to find the people controlling it and arrest them.

Unfortunately we have no idea who controls Storm, although there's some speculation that they're Russian. The programmers are obviously very skilled, and they're continuing to work on their creation.

Oddly enough, Storm isn't doing much, so far, except gathering strength. Aside from continuing to infect other Windows machines and attacking particular sites that are attacking it, Storm has only been implicated in some pump-and-dump stock scams. There are rumors that Storm is leased out to other criminal groups. Other than that, nothing.

Personally, I'm worried about what Storm's creators are planning for Phase II.

- - -

Bruce Schneier is CTO of BT Counterpane and author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World.

Tuesday, August 28, 2007

Second Life

TV Station: Television Broadcasting in Second Life

Posted Jul 31, 2007

TV Station produces machinima (movies made in-world) for your advertising and also promotes them via our SL Television. We have a large, fully equipped broadcasting studio, so we can produce many kinds of machinima, such as SL news reports, talk shows, concerts, and island presentations. If you have your own machinima, you can also promote it via SL Television. See SLnews - www.slnews.tv See TV Station - http://slurl.com/secondlife/TVstation/172/84/22

new animation movie out

Bee Movie - Trailer

Posted Jul 18, 2007

Barry B. Benson is a graduate bee fresh out of college who is disillusioned at his lone career choice: making honey. On a rare trip outside the hive, Barry's life is saved by Vanessa, a florist in New York City.

Wikipedia:10 things you did not know about images on Wikipedia


[Image caption: This image is free, so I can use it here and you can use it too!]
10 things you did not know about images on Wikipedia is a list of insights about Wikipedia specifically targeted at people who have limited prior knowledge of images on the project, such as new editors and new readers. These explanations should not surprise experienced editors, but hopefully will help the rest of the world to shape an informed opinion of our work and understand why it sometimes seems we do not have an "easy to get" image of something.
1. We want images.
However, we primarily want freely licensed images which are compatible with our policies and goal of creating a free resource for everyone. If a picture is worth a thousand words, a free one is giving a thousand words to everyone who wants to use it and will ever see it; a non-free one only gives visitors to this single website a thousand words.
We depend on people like you to author and contribute images for Wikipedia, and the rest of the world, to use, as long as you are willing to release the images under a free license.
When we say freely licensed we're talking about the freedom of use the public has with images, not the price.
More information: Wikipedia:Requested pictures, The Definition of Free Content
- We want usable images.
Please do not upload images that should not or cannot be used in an article. While we permit a limited number of images for users to use on their user pages, we do not need a ninth image of your Jack Russell Terrier in that article. The Wikimedia Foundation is not free web space for your images; please use a website designed for that purpose.
2. We have 750,300 images.
We reached 1 million images on all Wikipedia projects in July of 2006. There are over 1.75 million images on Commons. On the English Wikipedia we have over 750,000 images. You can help Wikipedia by going through images and looking for problems such as missing sources and licenses. You can nominate images for deletion via IFD. You can clean up images, too; see Wikipedia:Images for cleanup.
More information: Special:Statistics, Commons:Special:Statistics, Special:Unusedimages.
3. All images uploaded must have a source and license.
Failure to provide a source (who made it) and a license (how it can be used) will result in the image being deleted, possibly within as little as 48 hours. You must provide this information for every image you upload, with no exceptions, or the image will be deleted. Blatant and completely unjustifiable violations of copyright law and our image policies can be (and are) deleted almost immediately.
We have a long term mission to create and promote content which is free of the typical encumbrances of copyright law. This mission requires us to take copyright very seriously. Unlike (most) other websites that allow user submission and generation of content, we aggressively remove all copyright infringements as soon as we can find them and block people who willfully ignore this after being warned.
- Because free content is a fundamental part of our mission, our policy on image licensing is more restrictive than required by law.
More information: Wikipedia:Non-free content
4. Use non-free images only when nothing else is possible.
Do not go to the nearest website and grab an image of a person/place/building. It is extremely likely that image is both copyrighted and fails our Non-free content policy, which states that a non-free image may be used only when it cannot be replaced. For example, there's no way whatsoever that a logo of a political party or a screenshot of a video game can be replaced by a free image, but a photo of someone or a certain location can almost always be replaced, even if doing so may be very difficult. Search for free images, especially for living persons, existing buildings and places. Don't upload an image just because the article doesn't have one right now; we can (and will) wait for a free image to be created or released. If you are going to upload a non-free image, see Wikipedia:Non-free content criteria first. Wikipedia:Fair use rationale guideline will be helpful as well.
More information: Wikipedia:Non-free content criteria #1
5. Non-commercial, educational-only and non-derivative images are NOT "free" images.
All such restrictions unacceptably limit how other people may use the image off Wikipedia, which completely contradicts the entire point of "free images" and "free content" (see above). These licenses are not acceptable here, and such images must be justified as "fair use" or they will be immediately deleted.
6. No one has a perfect understanding of copyright law.
Even if you are a licensed attorney who practices in this area, US copyright law (which applies to Wikipedia) is complex, and while an understanding of how it applies to Wikipedia may be achievable, there is considerable gray area, and deciding the status of one image in a complex situation can be very difficult, if not impossible, at times.
7. We have an image use policy.
Once an image is uploaded and correctly sourced and licensed, it may then be used in articles. See Wikipedia:Image use policy, which describes the accepted ways of displaying and formatting images. If you use images in an article, you should be familiar with it. Example: Did you know ... that the maximum width at which an image may be displayed in an article is 550 pixels?
8. Ideally, there would not be any images stored on Wikipedia; they would all be on Wikimedia Commons.
Because we want free content, all images uploaded would ideally be free for everyone and therefore acceptable on our sister project, Wikimedia Commons. Images submitted to Commons are automatically available here... and on hundreds of other wikis run by the Wikimedia Foundation. If you're looking for an image for an article, be sure to search using the Commons Mayflower search.
More information: Wikipedia:Free image resources
9. Uploading the same image 8 times is not needed.
You can edit the image page! Just like every other page on Wikipedia, the image description page can be edited by anyone. Just click "edit this page" while looking at the image page. Forgot to license or give the source for the image when you uploaded it? Do not re-upload the image: edit the image description page and add the license!
Also, the wiki software can control the size of the images, so you do not need to re-upload a smaller version of the same image. See Wikipedia:Extended image syntax. There, you can learn how to use frames, control the position in the article and about captions! For more on captions, see Wikipedia:Captions.
10. You can use (free) images on Wikipedia yourself, anywhere you like.
You can use images that are on Commons and free images on Wikipedia provided you comply with the individual image license terms, not (necessarily) the GFDL. While all article text is licensed under the GFDL, free images have several licenses to choose from. See Wikipedia:Image copyright tags/Free licenses for the many possibilities. You can use them on any page on Wikipedia. You can even use them OFF Wikipedia, such as on a website, in printed material, anywhere! All "free" image licenses allow these uses.
- You cannot use non-free images anywhere except in relevant encyclopedia articles.
Non-free images can only be used in the article namespace where they have a rationale for existing. You cannot use them anywhere else, such as on policy pages, discussion pages, templates, or user communication pages. If you need to discuss them, link to them by putting a colon (:) between the "[[" and "Image:" like this: [[:Image:Imagenamegoeshere.ext]]

More information: Wikipedia:Non-free content criteria #9
Other media
All of the above applies to audio and video too.
We allow other forms of media, such as audio and even video, and the same rules apply for these media as they do for images.

Wikipedia:10 things you did not know about Wikipedia

10 things you did not know about Wikipedia is a list of insights about Wikipedia specifically targeted at people who have limited prior experience with the project, such as journalists, new editors, and new readers. These explanations should not surprise experienced editors, but hopefully will help the rest of the world to shape an informed opinion of our work.
More information: http://en.wikipedia.org/wiki/Wikipedia:About
1. We are not for sale.
If you are waiting for Wikipedia to be bought by your friendly neighborhood Internet giant, do not hold your breath. Wikipedia is a non-commercial website run by the Wikimedia Foundation, a 501(c)(3) non-profit organization based in St. Petersburg, Florida. We are supported by donations and grants, and our mission is to eventually bring free knowledge to everyone.
More information: http://wikimediafoundation.org/
2. Our work can be used by everyone, with a few conditions.
Wikipedia has taken a cue from the free software community (which includes projects like GNU/Linux and Mozilla Firefox) and done away with traditional copyright restrictions on our content. Instead, we have adopted what is known as a "free content license" (specifically, the GFDL): all text and composition created by our users are and will always remain free for anyone to copy, modify, and redistribute. We only insist that you credit the contributors, and that you do not impose new restrictions on the work or any improvements you make to it. Many of the images, videos, and other media on the site are also under free licenses, or in the public domain. Just check a file's description page to see its licensing terms.
More information: http://en.wikipedia.org/wiki/Wikipedia:Copyrights
3. We speak Banyumasan…
…and about 250 other languages. Granted, only about 60 of those Wikipedia language editions currently have more than 10,000 articles — but that is not because we are not trying. Articles in each language are generally started and develop differently from their equivalents in other languages, although some are direct translations, which are always performed by volunteer translators, and never by machines. The Wikimedia Foundation is supported by a growing network of independent chapter organizations, already in seven countries, which help us to raise awareness on the local level. In many countries, including the United States, Wikipedia is among the ten most popular websites.
More information: http://meta.wikimedia.org/wiki/List_of_Wikipedias and http://www.alexa.com/data/details/traffic_details?q=&url=wikipedia.org
4. You cannot actually change anything in Wikipedia...
…you can only add to it. Wikipedia is a database with an eternal memory. An article you read today is just the current draft; every time it is changed, we keep both the new version and a copy of the old version. This allows us to compare different versions, or restore older ones as needed. As a reader, you can even cite the specific copy of an article you are looking at. Just link to the article using the "Permanent link" at the bottom of the left menu, and your link will point to a page whose contents will never change. (However, if an article is deleted, you cannot view a permanent link to it unless you are an administrator.)
More information: http://en.wikipedia.org/wiki/Wiki
5. We care deeply about the quality of our work.
Wikipedia has a complex set of policies and quality control processes. Editors can patrol changes as they happen, monitor specific topics they know about, follow a user's track of contributions, tag articles with problems for other editors to work on, report vandals, discuss the merits of each article with other users, and a lot of other things. Our best articles are awarded "featured article" status, and problem pages are nominated for deletion. "WikiProjects" focus on improvements to particular topic areas. Really good articles may go into other media and be distributed to schools through Wikipedia:1.0. We care about getting things right, and we never stop thinking about new ways to do so.
More information: http://en.wikipedia.org/wiki/Wikipedia:Community_Portal, http://en.wikipedia.org/wiki/Wikipedia:Why_Wikipedia_is_so_great, http://en.wikipedia.org/wiki/Wikipedia:Attribution, http://en.wikipedia.org/wiki/Wikipedia:Verifiability
6. We do not expect you to trust us.
It is in the nature of an ever-changing work like Wikipedia that, while some articles are of the highest quality of scholarship, others are admittedly complete rubbish. We are fully aware of this. We work hard to keep the ratio of the greatest to the worst as high as possible, of course, and to find helpful ways to tell you what state an article is currently in. Even at its best, Wikipedia is an encyclopedia, not a primary source, with all the limitations it entails. We ask you not to condemn Wikipedia, but to use it with an informed understanding of what it represents. Also, as some articles may contain errors, please do not use Wikipedia to make important decisions.
More information: http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
7. We are not alone.
Wikipedia is part of a growing movement for free knowledge that is beginning to permeate science and education. The Wikimedia Foundation directly operates eight sister projects to the encyclopedia: Wiktionary (a dictionary and thesaurus), Wikisource (a library of source documents), Wikimedia Commons (a media repository of more than one million images, videos, and sound files), Wikibooks (a collection of textbooks and manuals), Wikiversity (an interactive learning resource), Wikinews (an experiment in citizen journalism), Wikiquote (a collection of quotations), and Wikispecies (a directory of all forms of life). Like Wikipedia itself, all these projects are freely licensed and open to contributions.
More information: http://wikimediafoundation.org/wiki/Our_projects
8. We are only collectors.
Articles in Wikipedia are not signed, and contributors are unpaid volunteers. Whether you claim to be a tenured professor, use your real name or prefer to remain without an identity, your edits and arguments will be judged on their merits. We require that sources be cited for all significant claims, and we do not permit editors to publicize their personal conclusions when writing articles. Editors must follow a neutral point of view; they must only collect relevant opinions which can be traced to reliable sources.
More information: http://en.wikipedia.org/wiki/Wikipedia:Five_pillars, http://en.wikipedia.org/wiki/Wikipedia:Verifiability
9. We are neither a dictatorship nor any other political system.
The Wikimedia Foundation is controlled by its Board of Trustees, the majority of whom the Bylaws require to be chosen from its community. The Board and the Wikimedia Foundation staff do not take a role in editorial issues, and the projects are self-governing and consensus-driven. Wikipedia founder Jimmy Wales occasionally acts as a final arbiter on the English Wikipedia, but his influence is based on respect, not power; it takes effect only where the community does not challenge it. Wikipedia is transparent and self-critical; controversies are debated openly and even documented within Wikipedia itself when they cross a threshold of significance.
More information: http://en.wikipedia.org/wiki/Criticism_of_Wikipedia, http://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_not
10. We are in it for the long haul.
We want Wikipedia to be around at least a hundred years from now, if it does not turn into something even more significant. Everything about Wikipedia is engineered towards that end: our content licensing, our organization and governance, our international focus, our fundraising strategy, our use of open source software, and our never-ending effort to achieve our vision. We want you to imagine a world in which every single human being can freely share in the sum of all knowledge. That is our commitment — and we need your help.
More information: http://wikimediafoundation.org/

Monday, August 27, 2007

COMDEX Las Vegas 2003 to Examine Latest In IT Security; Sessions to Focus on Security Strategies for Today's Heterogeneous Business Environments

Business Editors/High-Tech Writers

COMDEX Las Vegas 2003

SAN FRANCISCO--(BUSINESS WIRE)--Oct. 31, 2003

MediaLive International, Inc., producer of the world's best-known events, related media and marketing services for technology buyers and sellers, announced today that security will be a key focus of COMDEX Las Vegas 2003, November 16-20, 2003. Through a series of discussion panels, educational tutorials and hands-on demonstrations, COMDEX will highlight trends in current security practices for organizations of all sizes, the costs and benefits of each, as well as what the future holds, including wireless security, secure Web services, spam reduction and biometrics.

"This year's attendees will agree that COMDEX is the best place for the IT buying community to immerse themselves in mission-critical topics such as ensuring the security of their organization's intellectual property, Web domains, email systems, software systems and wireless infrastructure," said Eric Faurot, vice president and general manager of COMDEX. "With emerging technologies and standards such as Web services, Wi-Fi and Linux, it is critical that technology buyers understand security in the context of their overall architecture -- only COMDEX provides the broad-based, neutral platform required for that scale of interaction."

Security leaders and emerging companies that will be involved in COMDEX Las Vegas 2003 include: Cerberian; Computer Associates; Counterpane Internet Security; Cryptography Research; McAfee; Microsoft; Nokia; SonicWALL; Symantec Corp.; and more. The conference will showcase solutions for the key security issues businesses face, including securing wireless networks, fighting spam, securely deploying Web services and open-source software, authentication, virus protection and biometrics.

COMDEX Security Education Sessions

The COMDEX security conference will be anchored by a keynote address from Symantec Corp. CEO John W. Thompson, at 9 AM, Wednesday, November 19, in room C5 of the Las Vegas Convention Center.

In addition, the event will feature a Security Power Panel, "How Much Security is Enough?" at 11 AM on Monday, November 17, in room N109. Moderated by Tom Standage, technology correspondent for The Economist, the panel will focus on what businesses need for effective, efficient security. Panel participants include Christian Byrnes, vice president and service director for the META Group; Ben Golub, Verisign's senior vice president of Security, Payments and Managed Security; Dan MacDonald, vice president, Nokia; Ron Moritz, chief security strategist for Computer Associates; and Bruce Schneier, CTO and founder of Counterpane Internet Security.

Continuous security-related presentations will take place at the COMDEX Security Innovation Center, where experts from SonicWALL, McAfee, Cerberian and the ASCII group will share their knowledge and oversee attendees participating in hands-on, business security challenges.

Session topics at COMDEX's Security Conference include:

-- Deploying Wireless LANs Securely

-- Intrusion Prevention Systems

-- Security Policy

-- Where Hardware Security Meets Software Security - Weak Points and Real Attacks

-- Identity Management - The Future of Security

-- Making Sense out of Web Services Security

-- Dealing With Spam

-- Antivirus Measures: Critical Business Process or Costly Overhead?

-- Web Threats and Countermeasures

-- Securing Microsoft IIS

-- Choosing the Authentication Method that's Right for Your Organization

-- The Promise of Biometric Technologies

-- Deploying Biometrics in Your Workplace

COMDEX also is offering five specific, full-day security tutorials, November 16-17, each concentrating on a different issue, such as defeating junk mail, the use of Secure Sockets Layer, intrusion detection, and a live hacking demonstration.

COMDEX Las Vegas 2003 focuses on IT in the B2B marketplace and covers seven core technology themes: Linux and Open Source, Wireless and Mobility, the Digital Enterprise, Web Services, Windows Platform, On-Demand Computing and Security. Together, these themes represent the fastest growing areas of technology advancement that will drive the majority of market innovation in support of user needs in 2003 and beyond. For further information about the extensive security tracks COMDEX Las Vegas offers this year, visit www.comdex.com.

How to Register for COMDEX

Online registration is available immediately at www.comdex.com/lasvegas2003/register, or by calling toll free at 888-508-7510 or 508-743-0186 (outside the U.S.). Call center hours are Monday-Friday 7 AM - 5 PM, Pacific. Closed Saturday and Sunday.

About COMDEX

Part of the MediaLive International, Inc. family of global brands, COMDEX hosts educational forums, events and conferences that focus on the technology areas most critical to today's IT buyer. COMDEX fosters ongoing collaboration, communication and commerce for the $879 billion IT market by connecting IT vendors with decision makers in Global 2000 companies. Upcoming regional events include COMDEX Sweden 2004, January 23-25, in Goteborg; COMDEX Saudi Arabia 2004, March 14-17, in Jeddah; and COMDEX Canada 2004, March 24-26, in Toronto.

About MediaLive International, Inc.

MediaLive International is producer of the world's best-known events, related media and marketing services for technology buyers and sellers. MediaLive International's products and services encompass the IT industry's largest exhibitions, including COMDEX and NetWorld+Interop, such highly focused educational programs as BioSecurity and Next Generation Networks, custom seminars including JavaOne, respected publications including Business Communications Review, and specialized vendor marketing programs. Created in 2003 from the assets of Key3Media, MediaLive International is a privately held company headquartered in San Francisco, with offices throughout the world. For more information about MediaLive International, visit www.medialiveinternational.com.

MediaLive International, COMDEX, NetWorld, NetWorld+Interop, Next Generation Networks, Business Communications Review, BioSecurity and associated design marks and logos are trademarks or service marks owned or used under license by MediaLive International, Inc., and may or may not be registered in the United States and other countries. Other names mentioned may be trademarks or service marks of their respective owners.

COPYRIGHT 2003 Business Wire
COPYRIGHT 2003 Gale Group

Thursday, August 23, 2007

Skype recounts tale of 'perfect storm' outage

It was a dark and stormy upgrade, and it won't happen again, they say

Peter Sayer

August 21, 2007 (IDG News Service) -- The situation that prevented millions of people from accessing Skype Ltd.'s Internet telephony service late last week was a "perfect storm" and should not reoccur, the company said Tuesday.

The company initially attributed the problem, which began on Aug. 16, to the near-simultaneous rebooting of millions of computers, as Skype users running the Windows operating system attempted to reconnect to the service after downloading a series of routine software patches from Microsoft Corp.'s Windows Update service.

Skype's service relies on some of its users' computers to act as "supernodes," routing traffic for other, less well-connected, users. But as Skype customers tried to reconnect, many of those supernodes were themselves in the process of rebooting. The remaining supernodes were soon overwhelmed because a bug in the company's software did not efficiently allocate the network resources available.

Users were skeptical of this explanation. Microsoft regularly issues patches that may cause Windows computers to reboot, and they haven't caused problems for Skype before. Microsoft releases software updates on the second Tuesday of each month, a day known to systems administrators as "Patch Tuesday."

Skype spokesman Villu Arak offered a more detailed explanation of Skype's outage on Tuesday: Last week's problems were the result of a "perfect storm" of exceptionally high traffic through the service at the same time as the Windows Update process led to a shortage of supernodes in the service's peer-to-peer network.

The company did not offer an explanation for the high traffic, but accepted full responsibility for the software problem.

"Skype and Microsoft engineers went through the list of patches that had been pushed out," Arak wrote. "We ruled each one out as a possible cause for Skype's problems. We also walked through the standard Windows Update process to understand it better and to ensure that nothing in the process had changed from the past (and nothing had)."

The catastrophic effect on Skype's service was entirely Skype's fault -- a result of its software being unable to deal with simultaneous high load and supernode rebooting, according to Arak.
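
A back-of-the-envelope model shows how violently that combination bites. Every number below is invented for illustration; Skype has not published supernode counts or per-node capacities.

    # Toy model of the outage: a reconnection stampede hitting a supernode
    # pool that is mostly offline. All figures here are hypothetical.
    CLIENTS = 1_000_000          # clients trying to sign in after the reboot
    SUPERNODES = 20_000          # normal supernode pool
    CAPACITY = 60                # clients one supernode can comfortably route

    def load(clients, nodes):
        return clients / nodes

    print(f"normal day: {load(CLIENTS, SUPERNODES):.0f} clients per supernode")
    # Patch wave: 80% of supernodes are themselves rebooting at sign-in time.
    online = SUPERNODES * 0.2
    print(f"patch wave: {load(CLIENTS, online):.0f} clients per supernode "
          f"(capacity {CAPACITY})")

Under these made-up figures, per-node load jumps from 50 to 250 against a capacity of 60, which is the shape of failure Arak describes: not one broken patch, but a burst the allocation logic could not spread.
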

On Aug. 17, the day after the problems began, Skype released a new version of its software client for Windows to correct the problem. That update should behave better the next time high traffic coincides with a scarcity of supernodes, he said.

Skype had updated versions of its software client for Windows, Mac and Linux since July's Patch Tuesday and before last week's outage, but the changes made in those updates were not responsible for the problem, according to company spokeswoman Imogen Bailey.

Reprinted with permission from idg.net. Story copyright 2006 International Data Group. All rights reserved.

Zink Imaging LLC's Zink: Inkless Photo Printing


Mark Hall

August 20, 2007 (Computerworld) -- Zink Imaging LLC: Zink

Johannes Gutenberg might have judged the folks at Zink Imaging LLC as heretics. After all, they have removed ink from the publishing process, eliminating what has been a fundamental element of printing since the first Bible rolled off the press in 1456. Instead, the scientists at the Waltham, Mass., firm focused their genius on the other key part of the printing paradigm — paper.

“The magic is in the paper,” says Stephen Herchen, chief technology officer at Zink, which is short for “zero ink.”

Herchen says Zink started as a project inside Polaroid Corp. in the 1990s before the storied camera company spun out Zink as a fully independent entity in 2005. The technology invented at Polaroid and perfected by Zink uses millions of colorless dye crystals layered under polymer-coated paper, making the prints durable enough for long-lasting photos. When the crystals are heated at different temperatures at specific intervals, they melt onto the paper in the traditional cyan, magenta, yellow and black used by ink-jet, laser and other printing devices. At the Demo Conference in Palm Desert, Calif., earlier this year, Herchen showed this reporter how it worked by lighting a match and holding it under the sample blank white paper to get the crystals to melt into a rainbow of colors.
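
The temperature-and-time addressing can be sketched as a simple lookup: a short, hot pulse melts one crystal layer, a longer, cooler pulse another. The thresholds below are invented purely for illustration; Zink's actual activation temperatures and pulse widths are proprietary.

    def activated_layer(temp_c, pulse_ms):
        """Pick which hypothetical crystal layer a printhead pulse melts."""
        if temp_c >= 200 and pulse_ms < 0.5:        # short, hot pulse: top layer
            return "yellow"
        if 150 <= temp_c < 200 and pulse_ms < 2:    # medium pulse: middle layer
            return "magenta"
        if 100 <= temp_c < 150 and pulse_ms >= 2:   # long, cooler pulse: bottom layer
            return "cyan"
        return "none"

    # A full-color pixel is built by firing several pulses at the same spot.
    for pulse in [(210, 0.3), (170, 1.0), (120, 3.0)]:
        print(pulse, "->", activated_layer(*pulse))
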

Wow Factor: Inkless Photo Printing. Dye crystals embedded in special paper become colored when a printhead heats and activates them.

According to Zink’s CTO, like a lot of scientific teams involved in breakthrough projects, the 50 chemists and physicists involved at Polaroid and then Zink went through a long process of trial and error to create the right combination of molecules that could be controlled on the paper. And the final result had the look and feel of a regular photograph, he says.

Another upside is that consumers will no longer have to dispose of environmentally iffy ink cartridges, Herchen says. And pricing will be less than $2 per 10-sheet pack, the company says.

IDC analyst Ron Glaz says the technology is certainly innovative, but he says not to expect it to replace a desktop or network printer anytime soon. “It’s a niche product for a niche market,” he says.

Scott Wicker, Zink’s chief marketing officer, doesn’t disagree. What sets Zink apart is that it “enables printing where it doesn’t currently exist,” he says, explaining that without the need for ink cartridges or ribbons, printers can now be built into small, mobile devices such as digital cameras.

Wicker says the company will control the manufacture of the Zink printer paper, but partners will build and distribute printer products, an approach Glaz says could improve the likelihood of Zink’s success. Wicker adds there are no restrictions on the paper size that Zink can produce, although he says that the first paper to ship with products late this year will be 2 by 3 inches.

Cleversafe's Dispersed Storage

Robert L. Scheier

August 20, 2007 (Computerworld) -- Cleversafe Inc.

Cleversafe Dispersed Storage: Unique algorithms disperse data over the Internet to servers on a grid.

After selling his music services company MusicNow to Circuit City Stores Inc. in 2004, Chris Gladwin took a break to organize his own music and photos. It was then that Gladwin realized the usual method — storing multiple copies of data — was complicated and expensive.

A longtime inventor with an interest in cryptography, Gladwin developed algorithms to securely split and save data among multiple nodes and reassemble it when needed. That November, he founded Cleversafe Inc. to commercialize his work. Now a 29-person company, with Gladwin serving as president and CEO, Cleversafe is funded by more than $5 million from Gladwin and other early employees as well as “angel” and venture investors.

Wow Factor: Slice-and-Dice Storage. Unique algorithms slice, scramble, compress and disperse data over the Internet to servers on a grid.


Cleversafe’s Dispersed Storage software splits data into 11 “slices” of bytes, each of which is encrypted and stored on a different server across the Internet or within a single data center. This approach provides security, says Gladwin, because no one slice contains enough information to reconstitute any usable data. The self-healing grid provides up to 99.9999999999% reliability because data can be reconstituted using slices from any six nodes. Scalability is ensured, Gladwin says, because adding more storage requires merely adding servers to the grid or storage to the existing servers.
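
The "any 6 of 11" property can be illustrated with classical threshold secret sharing, a cousin of the information dispersal algorithms Cleversafe builds on. Below is a minimal sketch using Shamir-style polynomial splitting over a prime field; note that each Shamir share is as large as the original data, whereas true dispersal codes (Reed-Solomon style) are what achieve the low storage ratios the article cites. Cleversafe's production algorithms are proprietary.

    import random

    PRIME = 2**127 - 1   # prime field large enough for small secrets

    def split(secret, n=11, k=6):
        """Split secret into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange-interpolate the hidden polynomial at x = 0."""
        total = 0
        for xj, yj in shares:
            num = den = 1
            for xm, _ in shares:
                if xm != xj:
                    num = num * (-xm) % PRIME
                    den = den * (xj - xm) % PRIME
            total = (total + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return total

    secret = int.from_bytes(b"dispersed", "big")
    shares = split(secret)
    assert reconstruct(random.sample(shares, 6)) == secret   # any 6 slices suffice
    print("reconstructed from 6 of 11 slices")

No six-share subset is special, which is the security and self-healing argument in one: any five slices reveal nothing useful, and the loss of up to five nodes costs no data.
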

Among the biggest cost savers, says Gladwin, was the reduction in total storage needs achieved by eliminating the need for separate copies for backups, archives or disaster recovery. Compared with ratios of 5-to-1 or 6-to-1 of “extra” vs. original data in copy-based storage environments, Cleversafe requires ratios of 1.3-to-1 or less. While Gladwin has no specifics on how his software will be priced, he says customers should see savings “at least proportional” to the reduction in total stored data.

Originally, the team thought in terms of gigabytes of data to be stored. “Now,” says Gladwin, “we think in terabytes and even occasionally petabytes.” He says the first target will be secondary storage, where Dispersed Storage could replace tape and optical drives for backup and archiving.

This approach could “completely change the way storage administrators conduct their daily operations,” says John Webster, an analyst at Illuminata Inc.

Scheier is a freelance writer in Boylston, Mass. Contact him at rscheier@charter.net.

Ghost Inc.'s Ghost: The Everywhere OS

Gary Anthes
August 20, 2007 (Computerworld) -- Ghost Inc.: Ghost

“Ghost is founded on the passionate belief that the Windows and Mac model of your operating system — with your precious applications and data all walled inside one physical computer — is obsolete,” says Ghost’s creator, Zvi Schreiber.

The Global Hosted Operating System, or Ghost, is the logical next step in a trend to move applications and files from client computers to the Internet, says Schreiber. It is a Web-hosted image of your desktop or laptop — a virtual computer that can be accessed by any client device via a Web browser.

Ghost doesn’t require software upgrades or patches for user machines, and it’s always backed up. But its key selling point is the mobility and device-independence it offers users, says Schreiber, CEO of start-up Ghost Inc. in New York. “Young people do a lot of computing at school, and business people don’t want to carry their laptops everywhere,” he says. “People want to get their computing environment from anywhere.”

Wow Factor: The Everywhere OS. A free PC environment can be accessed from any browser, with a single online file system, single log-in and file sharing.


Offered as a set of application services inside a virtual computer, Ghost is free for users. Schreiber says revenue will come from vendors who remit fees to the company when they sell products or services to Ghost users.

Ghost is in an alpha, “open to the public” release, Schreiber says, and it’s available at http://g.ho.st/home. “We don’t feel we are offering a complete service by any means, mainly in terms of the number of applications that are nicely integrated into Ghost,” he says. “But that’s changing pretty rapidly. By the third quarter, it will be a beta [release]. Not perfect, but really usable.”

Ghost users can’t use client-based applications like Microsoft Word or Excel, but they can use Web-based alternatives such as Google Docs & Spreadsheets. Schreiber says over the next year or so, he will seek partners to create Web-hosted versions of all popular desktop programs and help users migrate their data to them.

Rick Boyd, a Catholic priest and self-described “computer nerd” in Park Rapids, Minn., says he uses Ghost to host and manage his bookmarks, files and documents. “I use it every day, and I find it very convenient,” he says. “I believe that Web-based applications are the future.”

Boyd says he likes being able to access his bookmarks from any computer, no matter where he might be. “But it’s not just a bookmark manager,” he says, “it’s a virtual desktop, and that’s very innovative.”

Ghost is built from OpenLaszlo, an open-source platform for the development and delivery of Web applications that have the appearance and functions of traditional desktop applications. Ghost is hosted by Amazon Web Services.

Despite the use of OpenLaszlo and Amazon, Ghost developers still had to write a fair amount of software and do considerable systems integration work, Schreiber says.

“We had to think about the architecture very carefully to make it scalable, robust and secure,” he says. Scalability was enhanced by pushing some of the processing and memory use from the server to clients, Schreiber adds.

10 Cool Cutting-Edge Technologies on the Horizon Now

Eleksen Group's wearable gadgetry kicks off this year's Horizon Award winners

Robert Mitchell

August 20, 2007 (Computerworld) -- Eleksen Group PLC: Sideshow Wearable Display Module

It started as an idea for making more life-like puppets for the British TV show Spitting Image. Four years later, Eleksen Group PLC is hoping that its interactive textile technology will form the foundation for a new generation of wash-and-wear computer control and display devices.

The centerpiece of the technology is ElekTex, a fabric-based, pressure-sensitive control interface that can be integrated into jackets, bags and other textile products. The technology is already used as a remote control for iPods and cell phones in backpacks and coats. At this year’s Consumer Electronics Show in Las Vegas, Eleksen presented its latest design concept, which integrates ElekTex fabric controls with an LCD display that can interact with Windows Vista’s Sideshow feature. The latter exports information from a Vista laptop to a secondary display. Mini-applications, or “gadgets,” written for Sideshow can then wirelessly deliver e-mail, alerts or other updates to the remote screen even if the laptop remains in its case and turned off. Fabric-based controls and embedded control electronics interact with the display. Iver Heath, England-based Eleksen is also planning support for secondary displays on the Macintosh.

Wow Factor: Wearable Gadgetry. Schedules and recent e-mails can be viewed without powering up a laptop, through a fabric-embedded module.


Initial implementations of ElekTex will likely be integrated into laptop bags with embedded button controls and small color LCD displays, says John Collins, vice president of marketing and business development at Eleksen. However, Collins envisions an eventual move to flexible displays based on color organic LED technology. That would allow the control and display surfaces to be embedded on any fabric surface, including a shirt. “Imagine receiving critical information from enterprise information systems on your sleeve,” says Vassilis Seferidis, vice president of product management.

ElekTex fabrics are constructed from woven layers of nylon and carbon-impregnated nylon that’s not only bendable, but also washable. Because of the nature of the material, it can be sewed, glued or even heat-welded into other fabrics. Mark Treger, national sales manager at Goodhope Bags Inc. in Chino, Calif., has embedded ElekTex sensors into backpacks to control iPods. “You can just sew through it. It just works,” he says. The one limitation is cost. Collins estimates that a laptop bag with the technology would cost about $200. But Treger says the cost of the ElekTex technology has already dropped by 50% in the past year. He sells a fabric keyboard for use with the BlackBerry that sold for $169 last year. Today, it’s priced at under $130, and by the holiday season, he says, retailers will be selling them for about $80.

The technology and the manufacturing process took years to perfect, says Collins — and that gives the company a leg up on any competition. “Their strength is understanding how to do the wiring and connections and create control surfaces with the right amount of tactile feedback,” says Leslie Fiering, an analyst at Gartner Inc.

“The knitted, woven materials allow us to get x, y and z coordinates,” says Collins. Currently, Eleksen is producing button and scroll controls. Next, it plans to support gestures across the control surface, simulating a mouse or fabric-based touch pad. “It’s a matrix arrangement, similar to what you’d find on touch-screen displays,” Collins says.
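
A minimal sketch of that matrix idea: scan the grid of sensing cells and report the peak cell as x and y, with its pressure reading as z. The 4x4 readings below are invented for illustration; a real ElekTex controller scans the weave electrically and differs in detail.

    def read_touch(matrix, threshold=10):
        """Return (x, y, pressure) for the strongest cell, or None if idle."""
        best = None
        for y, row in enumerate(matrix):
            for x, pressure in enumerate(row):
                if best is None or pressure > best[2]:
                    best = (x, y, pressure)
        return best if best and best[2] > threshold else None

    scan = [
        [0, 1, 0, 0],
        [0, 2, 57, 3],    # a finger pressing near column 2, row 1
        [0, 1, 4, 1],
        [0, 0, 0, 0],
    ]
    print(read_touch(scan))   # -> (2, 1, 57)

Tracking that peak over successive scans is what turns a pressure matrix into the gesture and touch-pad behavior Collins describes.
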

Seferidis expects viable bendable displays to be available in about two years. But he is working with vendors to make displays do more than just bend. “Our work will be to make them washable,” he says.

The Sideshow capability is “pretty cool,” says Fiering, but even more interesting will be what designers can dream up if the technology catches on. The most fascinating applications, she says, haven’t even been thought of yet.

Thursday, July 26, 2007

Computer learns vowels like a baby

A team of researchers has developed a computer program that can learn to decipher sounds the way a baby does.

The impetus behind the program was to better understand how people learn to talk, or more specifically, to see whether language is hard-wired in the brain.

Tests of the computer model back up a theory that babies learn to speak by sorting through different sounds until they understand the structure of a language, according to James McClelland, a psychology professor at Stanford University, who wrote a paper on the subject that appeared in the Proceedings of the National Academy of Sciences. McClelland was quoted in an article from Reuters.

McClelland's team found that the computer could track vowel sounds just like a baby. "In the past, people have tried to argue it wasn't possible for any machine to learn these things," he said in the Reuters article.
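
A minimal sketch of the idea, on synthetic data: generate (F1, F2) formant pairs around three vowel targets, shuffle them into an unlabeled stream, and let a one-pass online clusterer discover the categories from the sound distribution alone. Plain online k-means stands in here for the richer mixture model in the PNAS paper; with an unlucky start it can merge two categories, which is itself a familiar behavior of distributional learners.

    import random

    random.seed(0)
    # Rough adult formant targets (Hz) for the vowels /i/, /a/ and /u/.
    TARGETS = [(270, 2290), (730, 1090), (300, 870)]
    sounds = [(random.gauss(f1, 60), random.gauss(f2, 100))
              for f1, f2 in TARGETS for _ in range(300)]
    random.shuffle(sounds)                     # no labels, just a stream of sound

    centers = [list(s) for s in random.sample(sounds, 3)]
    for f1, f2 in sounds:                      # one pass, like overheard speech
        j = min(range(3), key=lambda i: (centers[i][0] - f1) ** 2
                                        + (centers[i][1] - f2) ** 2)
        centers[j][0] += 0.05 * (f1 - centers[j][0])   # nudge nearest category
        centers[j][1] += 0.05 * (f2 - centers[j][1])

    for c in sorted(centers):
        print(f"learned vowel category near F1={c[0]:.0f} Hz, F2={c[1]:.0f} Hz")
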

Sunday, June 10, 2007

The amazing story of the birth of HCL

In 1976, during lunchtime at Delhi Cloth Mills (DCM), a group of six young engineers sat in the office canteen discussing their work woes at DCM's calculator division.
Despite all having jobs that paid them well, they were an unhappy lot -- they wanted to do more, riding on their own gumption. They decided to quit their jobs and start a venture of their own.
The man who was fuelling the ambitions of his five other colleagues at that canteen was a 30-year-old engineer from Tamil Nadu, Shiv Nadar. And this is how the story of Hindustan Computers Limited, HCL began.
Nadar and his five colleagues quit DCM in the summer of 1976. They decided to set up a company that would make personal computers. They had gathered enough technical expertise at DCM's calculator division, but, as with all start-ups, getting funds was the problem.
However, Nadar's passion for his new dream company and the support of his enthusiastic colleagues soon made the task very easy.
Founder, Chairman and CEO, HCL Technologies, Shiv Nadar told CNBC-TV18, "The first person I met was Arjun and he was also a management trainee like me. He was a couple of batches junior to me. . . We became very good friends and we are still very good friends. Then, the rest of them all worked for DCM and we all are of similar age, so we used to hang out together, crib together, have fun together, work together."
Nadar would first have to gather cash to give wings to his idea of manufacturing computers. He floated a company called Microcomp Limited, through which he would sell teledigital calculators. This venture threw up enough cash to allow the founders to give shape to their ultimate dream of manufacturing computers in India, at a time when computers were just sophisticated cousins of the good old calculator. Support also came from the Uttar Pradesh government.
Finally, the founders put together Rs 20 lakh (Rs 2 million) and HCL was born.
The year after HCL was floated, the Indian government reined in the ambitions of foreign companies in India. This sounded the death knell for companies like IBM and Coca-Cola, while bells began to ring for Indian ventures like HCL.
Managing Editor, The Smart Manager, Dr Gita Piramal says, "Few Indian businessmen were happy when George Fernandes became industry minister in 1977, when the Janata Party came to power. Foreign businessmen were even less happy that Coca-Cola and IBM left India. IBM's leaving, left a major vacuum and this was the vacuum in which Shiv Nadar spotted an opportunity. He stepped in and customers began to trickle in."
HCL started shipping its in-house microcomputers around the same time as its American counterpart Apple, and took only two more years to introduce its 16-bit processor.
By 1983, it had indigenously developed a relational database management system, a network operating system and client-server architecture, almost at the same time as its global peers. The road to the top was now in sight, and HCL took it a step further by exploring foreign shores.
HCL's first brush with international business came about in 1979 when it set up a venture in Singapore; it was called Far East Computers. HCL was only three years old and its net worth was around Rs 3 crore (Rs 30 million). Shiv Nadar set an ambitious target for the venture and notched up sales of Rs 10 lakh (Rs 1 million) in the very first year.
Co-Founder, HCL Technologies, Ajai Chowdhry says, "We discovered that there was a good opportunity to enter Singapore with our own hardware we had manufactured in Singapore. But the strategy was very clearly around selling computerization rather than computers and so we actually took the whole idea of hardware, software solution and service and packaged it and presented it as computerization."
Even as it was basking in its success in Singapore, HCL planned a whole new area of expansion and tapped into a territory that was lying unexplored in the country - computer education. Sensing the increasing demand for computer training, HCL set up NIIT in 1981 to impart high quality IT education in India.
Nadar explains, "We knew many people in IIT and the Indian Institute of Science. We formed an advisory panel and asked them, can you help us navigate this whole thing, and they were very enthusiastic about it. They were of course shaken up a little bit when they saw that we started advertising in Bombay -- selling education as a commercial project."
From calculators to IT education, the first five years of HCL were a combination of growth and expansion riddled with uncertainty. But the company was now gearing up to set a much bigger target for itself, and an announcement from the government would help it take off to those soaring heights.
In 1984, the Indian government announced a new policy that would change the fortunes of the entire computer industry. The government opened up the computer market and permitted import of technology. With new guidelines and regulations in place, HCL grabbed the opportunity to launch its own personal computer.
The demand for personal computers was slowly but surely mounting in the Indian market. Most banks were shifting to the UNIX platform. A few companies approached HCL for personal computers, so the founders flew all over the world to bring back PCs they could take apart, study, reproduce and indigenously upgrade. Their first innovative personal computer was ready in three weeks' time, and soon they launched their first range of computers, which they called the Busybee.
Chowdhry says, "In a lot of ways, it opened up the market because one thing was that, you no longer had to develop basic stuff in India - like operating systems but on the other hand it opened new opportunities like banking because as per government policy, all banking computers must be UNIX based. So, feverishly we set out creating a UNIX based computer and we bought the UNIX source code and created that product out of nothing."
In two years, HCL became one of the largest IT companies in India. The founders now went to different corners of the country to set up sales and marketing offices, and the company needed the brightest minds to take it to the next level of competition.
Campus recruitment in management and technical institutes began in full swing and HCL grabbed some of the best talent by offering pay packages that outscored some of the best companies of the time -- Rs 2,000 per month to start with.
The adrenaline rush of the first half of the 1980s and the rapid expansion strategy soon caught up with HCL. A turning point came in 1989, when HCL on the basis of a report by McKinsey and Company decided to venture into the American computer hardware market.
HCL America was born but the project fell flat on its face. HCL had failed to follow a very crucial step necessary to enter the US market. A big disappointment was on its way.
Piramal says, "For every entrepreneur, the US will always remain the dream market. It's the biggest market in the world and Shiv Nadar obviously was drawn to it, but he really didn't know what he was getting into. The computers he made didn't get environmental clearances. In fact, HCL America probably turned into his biggest mistake. But Shiv himself is a very strong person; he understood he was making a mistake, he saw that Infosys and Wipro were doing really well in software, and he was not too proud to change gears. Finally, HCL did enter the software market."
It didn't take too long for HCL to brush off the disappointment in the US. Its first failure in the US was set aside in 1991 and HCL entered into a partnership with HP (Hewlett-Packard) to form HCL HP Limited. It opened new avenues for HCL and gave opportunities to firm up its revenues.
In three years, another new possibility came knocking at its door and in 1994, HCL looked beyond PCs and tied up with Nokia cellphones and Ericsson switches for distribution.
Chowdhry explains, "In 1991, India didn't have enough foreign exchange. We were in the hardware business and we didn't have enough funds. That's the time when a clear thought entered our minds -- that we should globalize -- and in the very early days, we actually created a joint venture with Hewlett-Packard."
In 1997, HCL, by then a multi-dimensional company, spun off HCL Technologies Limited to mark its entry into the global software space. It made up its mind to focus on software development. Twenty years into his entrepreneurial journey, Shiv Nadar was now ready to take on global competition with all his might.
From the '70s to the '90s, the HCL story was one of steady rise, but amid its rapid expansion and continuous flow of achievements, Shiv Nadar didn't anticipate that he would be in for a rude shock, and that it would come from someone very close.
In 1998, Arjun Malhotra, Shiv Nadar's comrade and friend, decided to leave the company to start his own venture, TechSpan, headquartered in Sunnyvale, California. He was also one of the largest shareholders in HCL Infosystems at that time. For Shiv Nadar, it was time to think afresh.
Revenues from the hardware sector were shrinking, and Nadar now decided to redesign HCL. The company once again needed funds to grow, and this time around, Nadar decided to look at the capital market. An initial public offering (IPO) was made on the Indian stock exchanges in 1999, and it was a stupendous success.
President, HCL Technologies, Vineet Nayar says, "The shareholders supported us. I think we started the IPO at Rs 580 and it went up to Rs 2,800 or something like that. So, it was a dream run. I think the shareholders bought the argument we were making, they liked the articulation of the strategy, they liked the management team and they liked the vision we were painting, and they supported the stock full time. That was a turning point for HCL."
Shiv Nadar now put aside his dream of becoming a global hardware major and ventured into software with an open mind and a clean slate. Technology was opening up vistas of opportunities in the software sector, and HCL now wanted to build new businesses.
Global business became a priority, so HCL started a BPO in Ireland in 2001. Its partner in this ambitious venture was British Telecom.
The years that followed saw HCL in an expansion mode. In 2005 alone, HCL signed a software development agreement with Boeing for its 787 dreamliner programme. Next came a venture with NEC, Japan.
It even bought out its joint ventures with Deutsche Bank and British Telecom's Apollo Contact Center. In the same year, HCL Infosystems launched its sub-Rs 10,000 personal computer and joined hands with AMD and Microsoft to bridge the digital divide.
The successes of 2005 spilled over into 2006, and the company was now producing over 75,000 machines a month, with more joint ventures being added to its list. But in spite of this overwhelming success, Shiv Nadar would not rest. A nagging sense of dissatisfaction, of potential not yet fully exploited, still drove Nadar and the company to achieve much more.
Thirty years after starting his company, Shiv Nadar really does not have much to complain about. Hindustan Computers Ltd today is an empire worth $3.5 billion, with a staff strength of 34,000.
But then, dissatisfaction has been the quintessential factor that has made Shiv Nadar the visionary he was and continues to be. Dissatisfaction once drove him to quit his job at DCM, and it is that same quality that even today drives him to achieve much more, when he could quite easily rest on his laurels.

Tuesday, March 27, 2007

How to Improve Your Wi-Fi Network's Performance

By Becky Waring, PC World
How can I extend the range of my home Wi-Fi network?
First, make sure you are getting the most out of your current Wi-Fi router: Mount it in a central location in your house, preferably high on a wall; make sure that other 2.4-GHz devices such as cordless phones, baby monitors, wireless audio speakers, Bluetooth gadgets, and microwave ovens are not causing interference; and separate your router from your neighbors' routers on the Wi-Fi spectrum. The 2.4-GHz band has only three non-overlapping channels -- 1, 6, and 11 -- so if a neighbor is using channel 1, for example, try channel 6 or 11 to minimize the chance of cross-channel interference.

If you still get a poor signal, consider upgrading to a router that incorporates MIMO (multiple-input, multiple-output) or draft-n technology. (See our latest review of these devices, "Wireless Routers: The Truth About Superfast Draft-N"). These routers not only provide far greater range than standard 802.11b/g routers, but they also boost speed by as much as ten times.

Finally, if you have particular Wi-Fi trouble spots in your house, such as odd corners, a basement, or an attic, power-line networking can be a great way to serve those areas. With power-line devices, you simply plug one adapter into a wall outlet and run an ethernet cord to your router; then you plug another adapter into an outlet near the device you want to connect to the network and run an ethernet cord to that device. You'll need reasonably clean power--free from excessive interference from other electrical devices--but the newest technologies, such as HomePlug AV and HD-PLC, work very well.

What's 802.11n? Do I need to upgrade my router?
Wi-Fi standards are continually evolving as technology advances. The first Wi-Fi routers were 802.11b, with a maximum throughput of 11 megabits per second (Mbps). Next, 802.11g increased that to 54 Mbps. Now, MIMO and draft-802.11n routers have pushed the wireless frontier to 280 Mbps and beyond, rivaling wired ethernet. This year, the Wi-Fi Alliance will start certifying draft-802.11n routers. If you are in the market for a new router, definitely buy one of these models.

But if your old router provides satisfactory performance throughout your house, you needn't upgrade immediately. Your current equipment will operate just fine with 802.11n devices as they begin to appear. Wait to upgrade until you really need the added performance for bandwidth-intensive applications such as streaming video. Prices will only go down in the meantime.

How do I share a printer or game console over a Wi-Fi network?
For between $50 and $100, you can buy an adapter that will convert any device that has a wired ethernet port into a Wi-Fi-capable one. These Wi-Fi-to-ethernet bridges are available from companies like D-Link and Netgear, and are usually marketed as "wireless game adapters" for PlayStations, GameCubes, and Xboxes. But they work equally well with ethernet printers and network security cameras.

Often the adapters work right out of the box if your Wi-Fi net is configured to use DHCP, which enables dynamic IP addressing. If it's not, you can set up an adapter by connecting it to your PC and then assigning an IP address. Note that with some older game consoles, you must attach a networking adapter that equips them with an ethernet port before you can add the bridge. The Xbox 360 has a USB port, for which Microsoft sells a Wi-Fi adapter.

For printers without ethernet ports, you can buy a wireless print server, also available from companies like Belkin, D-Link, and Linksys. Be sure to choose a print server with ports (USB and/or parallel) that match your printers. Note, however, that multifunction devices usually lose all but their printing functions when networked this way.

Can I add a network hard drive to my Wi-Fi net?
There are two basic ways to add storage to your wireless network, but in either case, it's best to physically locate the drive(s) next to your router and connect them by wires rather than using a wireless adapter. Generally, you needn't put a network drive in a different room, and a wired connection is always faster and more reliable than wireless, especially if you have gigabit-ethernet equipment.

What you are really looking for is access to your network storage over your Wi-Fi net, which you can achieve by connecting any Network Attached Storage (NAS) device to one of your router's ethernet ports. Alternatively, you can buy a device like the Linksys Network Storage Link NSLU2, which connects two USB hard drives to any router via ethernet.

Can I use VoIP over Wi-Fi? What kind of quality will I get?
Voice over IP actually requires comparatively little bandwidth--under 100 kilobits per second per call--whereas network throughput is normally measured in megabits per second. The problem with VoIP over Wi-Fi is more an issue of priorities: If someone else on the network is downloading large files from the Internet at the same time that you are making a call, choppiness and delays can occur.
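For rough perspective: a standard G.711 voice call carries 64 kbps of audio, so even with packet overhead it stays under 100 kbps, while even an aging 802.11b network delivers roughly 5 Mbps of real-world throughput--on paper, room for dozens of simultaneous calls.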

The faster your router, the fewer problems you should have using VoIP. Most late-model wireless routers also incorporate a technology called 802.11e, or QoS (quality of service), which prioritizes streaming data ahead of regular data transfers. Be sure to get matching adapters that also support QoS.

How do I stream audio and video from one room to another via Wi-Fi?
Any audio or video that you can stream over a wired net, you can also stream via Wi-Fi. You just need to be sure that your Wi-Fi equipment's pipes are broad and fast enough to handle the data. For high-quality video, you'll probably need either 802.11e or a vendor's proprietary implementation of QoS enabled in both your router and adapters.

To stream your media, you'll also need some kind of streaming server, such as a Windows Media Center PC; an NAS drive with software like the open source SlimServer; or one of the many dedicated wireless streaming-media consoles, such as the D-Link MediaLounge Wireless HD Media Player or Roku SoundBridge M1001. See "Get More Out of Your Wireless Network" for more on wireless streaming.

Friday, March 16, 2007

Freedom from the office -- the Bedouin way

Leave the office behind -- forever

Mike Elgan

March 16, 2007 (Computerworld) -- San Francisco Chronicle journalist Dan Frost wrote a nice piece recently about local digital nomads he called Bay Area Bedouins. These are people who work for San Francisco start-up companies without offices, who roam from one coffeehouse to the next, working wherever they find a Wi-Fi connection. (Traditionally, a Bedouin is a desert-dwelling nomad who lives in a tent and moves around to find greener pastures for his camels, sheep and goats, bringing everything he needs with him.)

No matter who you are, you can embrace the new Bedouinism. You don't have to live in the Bay Area or the desert or work for a start-up. You don't even need access to a coffeehouse. It's easy, and I'll tell you how. But first, let me tell you why becoming a Bedouin can improve your life.

Boost your career

There are several ways Bedouinism can help your career. The most obvious one is that, when you carry your office with you, you'll be more responsive to colleagues and customers. Instead of replying to requests for a document with: "I'm on the road today, so I'll send it to you when I'm at my desk on Monday," you can reply with: "Here's the document."

[Photo: the Bose QuietComfort headset. The author busy at work on the Honduran island of Roatan during an islandwide blackout. This is why you need an extra battery.]

A less obvious way the new Bedouinism can help you is that you can get closer to your business. For example, you can spend more time on the road visiting customers and attending more trade shows and other events that give you an edge. You can spend more time with suppliers and other business partners. You can do all this without a major penalty to your normal workload. You'll no longer do business the traditional way, in which you have two work modes: "in the office" and "on the road." Rather, you'll have only one work mode: "wherever I want to be and ready for anything."

You'll also be able to get work done at arbitrary times such as while shopping with your partner or standing in line at the DMV. In such situations, your brain is just sitting there doing nothing. You might as well whip out your phone and crank through some e-mails.

Take longer vacations

There's a lot of negative press these days about people who bring their work with them on vacations. And I agree. If you get only two or three weeks of vacation per year, you shouldn't spend that time working.

Bedouins take a different view. If you have the right kind of job, you can take vacations while you're "on the clock." In other words, you can travel for fun and adventure and keep on working. You can travel a lot more without needing more official vacation time.

I've done it. In August I took a monthlong vacation to Central America, backpacking from one Mayan ruin to the next, and I never officially took time off. I submitted my columns, provided reports and other input, participated in conference calls and interacted via e-mail. I used hotel Wi-Fi connections and local cybercafes to communicate and Skype to make business calls.

Nobody knew I was sunburned, drinking from a coconut and listening to howler monkeys as I replied to their e-mails.

Of course, this may be impossible in your line of work. But you can still be a part-time Bedouin and stretch vacations, taking small bits of time off that you otherwise couldn't.

Spend more time with friends and family

I don't advocate a workaholic lifestyle where you're taking calls constantly and never paying attention to the people in your personal life. But I do believe Bedouinism can get you out of your cubicle or office and into your home or wherever your family and friends are -- if you do it right.

Some critics slam the mobile lifestyle by saying that you never have any time off, that when e-mail comes in over the weekend, you are compelled to reply. But Bedouinism has no effect on the problem of workaholism -- it won't make you a workaholic if you aren't one now, and it won't cure workaholism, either. That's a separate issue (and a separate column). What Bedouinism does do is put you in control of where and when you work.

And remember: Giving yourself the ability to know when business e-mail and calls come in also tells you when they don't come in. My e-mail system sends all important e-mail to my BlackBerry Pearl, 24/7. (Here are details on my e-mail system.) So if my e-mail doesn't "ring" over the weekend, I can relax with the knowledge that nobody is waiting for me. If I do get mail over the weekend, I can choose to ignore it until Monday or, if it's important enough, reply immediately. In either case, I get maximum peace of mind. That's better than ignorance and the worry that some vital message is sitting there for days unread.

Have more fun

Here's the best reason to become a Bedouin: It's fun. Let's face it: offices suck. I'd much rather work at my dining room table, at the beach or -- what the heck -- at a San Francisco coffeehouse with the rest of my tribe. You spend one third of your waking life at work. Why spend a minute more than necessary in an environment where nobody wants to be?

The good news is that Bedouinism is cheaper and easier than ever before, thanks to myriad improvements in mobile hardware, software and services.

Here are my five steps to becoming a Bedouin:

1. Replace your desktop PC with a notebook. For less than $1,500, you can buy a desktop-replacement laptop with a 17-in. screen and tons of memory and storage. Get rid of your desktop computer forever and use one laptop for everything. If you crave the desktop experience, plug in peripherals like mouse, keyboard and giant monitor using a docking station.

People have their own preferences, but I wouldn't buy a laptop with anything smaller than a 17-inch screen. You'll be a lot happier and more productive than you would with a smaller screen. Buy an extra battery and look for hot-swappable components such as the ability to pull out the integrated CD/DVD drive and plug in the extra battery.

Every laptop has Wi-Fi these days, but make sure your new system has built-in Bluetooth, not the plug-in kind. You can use that Bluetooth connection for a variety of essential tasks such as using your cell phone as a modem and synchronizing your cell phone's data with the laptop.

Make sure you have extra protection for both your laptop and the add-on components. There are thousands of options available, from neoprene laptop covers to waterproof, padded briefcases. Find the option that best fits your style.

Aim is more connections per access point

Phil Hochmuth

March 16, 2007 (Network World) --

Foundry Networks Inc. this week launched new wireless LAN access points and controllers that can help users concentrate more connections per access point and stretch WLAN applications beyond simple data access.

With a new location management offering as well, Foundry said the new gear and software will help companies simplify WLAN deployment and management, and consolidate wireless data access with other services -- such as VoIP and location tracking -- on a single 802.11-based infrastructure.

Foundry's IronPoint Mobility AP150 access point -- based on technology from Meru Networks -- can support as many as 120 WLAN connections per device, a useful feature for deployments in large public spaces or high-traffic areas. The IronPoint Mobility Radio Switch 4000 is an even beefier WLAN access point, with built-in dual 802.11a and 802.11g radios and support for as many as 256 connections per device. These products, combined with the IronPoint Wireless Location Manager 2.02 software, let users deploy such services as rogue-access-point detection and location, and WLAN-based employee or asset tracking.

The IronPoint Mobility AP150 and IronPoint Mobility Radio Switch 4000 provide multiple-radio coverage and the ability to deploy an entire WLAN with a single 802.11 channel and a single Service Set Identifier (SSID) network name. Foundry said this simplifies management and configuration for administrators.

The new IronPoint gear also supports in-the-air quality of service (QoS), whereby the devices prioritize certain types of radio traffic between the client and the access point. Other WLAN equipment applies QoS settings to data or voice traffic only once packets hit the wired network at the access point, Foundry said.

The IronPoint Wireless Location Manager 2.02 software now identifies the location of unauthorized access points -- for example, an access point set up by a user in a cubicle or at a desk -- as well as unauthorized WLAN clients in a building or campus. This service can be overlaid on an existing Foundry WLAN infrastructure and does not require additional access points dedicated to location tracking, the company said.

The Meru-based Foundry WLAN gear competes with products such as Cisco Systems Inc.'s Airespace-based WLAN equipment, as well as gear from Aruba Networks.

Vista on a stick: How to flash your OS

This little trick can cut your work in half

Bill O'Brien
March 16, 2007 (Computerworld) -- In a world where there's too much to do -- and too little time to do it in -- we're always looking for shortcuts. So when we stumbled upon a blog entry by Kurt Shintaku over on Windows Live Spaces that promised to let us install Vista from a flash drive instead of an optical disc, there was certainly interest.

Why? Well, if we needed to install Vista on only one computer, it would be a case of "Who cares?" However, running down an aisle of 20 or 50 or 100 PCs with a flash drive in hand, pouring out data at 20MB/sec. – 25MB/sec. sure beats doing the same thing with a disc in hand and an optical drive pumping away at 16MB/sec. – 21MB/sec. Sure, it doesn't sound like much of a speed boost on paper, but when you start multiplying those small transfer rates by the length of each operation and then the number of repetitions, time can fly or it can crawl. The claim for the flash drive was that it soars, as much as 50% faster in some instances (assuming your PC's BIOS will let you boot from a USB device in the first place).

If that wasn't bait enough, fast 4GB flash drives aren't expensive, they can be recycled as Vista ReadyBoost drives when you're done, and best of all, the instructions for transferring our Vista disc to flash looked so easy a caveman could …, well, you get the picture. There were only 10 steps:
diskpart
select disk 1
clean
create partition primary
select partition 1
active
format fs=fat32
assign
exit
xcopy d:\*.* /s/e/f e:\

All right, you've just had a panic attack. What the heck are those? They're command-line instructions. You need to start things off by clicking your way through Start/All Programs/Accessories/Command Prompt. It sets up a DOS (remember that?) command screen. "Diskpart" starts a scripting subroutine that lets you enter line commands (which are the next eight things in the list), after which you exit the subroutine and use xcopy to transfer the contents of the disc to flash. See? Simple.

All right, it would be if it worked, but try as we might -- and we did for hours and hours and hours of iterations -- it didn't. We could manually start the install from the flash drive from a computer that was already up and running, but it wouldn't boot -- and that's important when you're beginning with a blank PC.

The situation was very surprising because Kurt Shintaku "works for Microsoft as a Principal Technology Specialist in Southern California." Then we noticed that Kurt had stripped some of the front-end stuff from a colleague's blog and, in that way that technicians can often be careless with simple things, he forgot to mention something his colleague did: You need to do this from a Vista PC.

Again, there's a why. Under XP, diskpart doesn't seem to recognize the flash device as a drive. It will display the device as a volume, but the remaining diskpart commands couldn't care less about that. Vista, on the other hand, recognizes the flash device as a drive. That's why we could transfer the contents of the Vista disc under XP, but we couldn't use the diskpart commands to make it a boot device. Needless to say, our first Vista install was from disc.
And it still didn't work! You know how it is when you pull a key out of your pocket and it won't unlock the lock even though you're positive it's the correct key. It might be the right key, but if so, it must be the wrong lock. Throwing caution to the wind, we reformatted the flash drive, this time under Vista, and tried again. Still nothing. The diskpart commands weren't working.

One more time into the breach. Formatted again, we took a look at the screen and saw that Vista recognized the drive as H: when there was no G: drive. It had skipped a letter. That shouldn't be a big thing. Diskpart commands work with disk numbers, not letters, and the numbers are assigned consecutively irrespective of the letter assignment. Still, just in case, we took a quick trip into the Disk Management tool (under Administrative Tools in the Control Panel) and changed the drive letter to G:.

Suddenly the skies cleared, the waters parted, and the commands worked flawlessly (almost -- Vista automatically reassigned the drive to H: after the "assign" command, and we needed to use that designation when we executed xcopy). The flash drive booted, Vista was installed, and, yes, it was faster than disc. Oh happy day!

But you're not quite done yet. Although the diskpart commands are very straightforward, they're also quite generic as shown. Let's take a last look at the list:

diskpart -- starts the "diskpart" scripting subroutine
select disk 1 -- focuses all subsequent commands on a particular disk
clean -- removes all configuration information from the disk
create partition primary -- creates a primary partition
select partition 1 -- moves the focus to the partition just created
active -- marks the partition as an active boot partition
format fs=fat32 -- formats the partition with a FAT32 file system
assign -- assigns a drive letter to the disk
exit -- exits diskpart
xcopy d:\*.* /s/e/f e:\ -- copies all files and directories from one device to another

After you've run the DOS command prompt screen and entered the diskpart command, you need to focus the rest of the subroutine commands on the disk you're about to work with by selecting it. It will probably not be "1" as shown. In fact, if you use the command as is, you'll destroy the contents of drive 1, whatever it might be.

To find out where your flash drive resides in the hierarchy, use the "list disk" command. (If you type "help" at the diskpart prompt you'll see a list of all available commands.) It will display each disk on your computer with its corresponding number. In our case, our Corsair Readout flash drive was shown as "3" so our select command was actually "select disk 3." From that point on, any command we issued within diskpart was used on disk 3 without needing to mention it specifically again.
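Incidentally, once you know your disk number, you needn't retype the commands interactively on every machine: diskpart can read them from a text file via its /s switch. Here's a sketch, assuming your flash drive showed up as disk 3 and using a file name of our own choosing (usbprep.txt):

rem usbprep.txt -- edit "select disk 3" to match your own "list disk" output
select disk 3
clean
create partition primary
select partition 1
active
format fs=fat32
assign
exit

Run it with "diskpart /s usbprep.txt" from the command prompt, then carry on with the xcopy step as usual.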

The xcopy command is also device-specific. Our optical drive was actually F: and, as mentioned, the flash drive was H:. (After you exit diskpart and before you use xcopy, you can check with Vista to see what your drive assignments are. The DOS command prompt window will just cycle out of sight as you do, but you can select it again to bring it to the top). Our xcopy command, therefore, looked like this:

xcopy f:\*.* /s/e/f h:\

(If you're DOS savvy, you've probably realized that the /s and /e switches overlap: /s copies directories and subdirectories but not empty ones, while /e copies them including empty ones and takes precedence when both are given. It didn't seem to cause a problem, so we let it be. The /f switch displays the full source and destination file names while copying is going on, and a file called install.wim, the actual installation image itself, will seem to take forever to get from disc to flash. Don't get anxious. Just sweat it out.)

When xcopy has completed transferring files, close the command prompt window. That's it. You're done. You can boot from the flash drive and do all your installations from there. Now you just have to figure out what you'll be doing with your free time… (And should you want to use your flash drive as a ReadyBoost drive when all your installs are completed, you'll have to reformat it so it's blank and, once connected to your Vista computer, right-click its icon and set it to work as one from its Properties box.)

Bill O’Brien has written a half-dozen books on computers and technology. He has also written articles on topics ranging from Apple computers to PCs to Linux to commentary on IT hardware decisions.

Saturday, March 10, 2007

300

a movie about an ancient war, done entirely in 3D, CGI, and other high-tech computing wizardry

watch

a little about the film

filming done in 60 days flat...
30 top-end animation firms hired.
all sets and scenery done with computer software; only the actors are real.
extensive use of blue screen...

enjoy

Friday, March 02, 2007

Data Encryption Techniques

Introduction

Often there has been a need to protect information from 'prying eyes'. In the electronic age, information that could otherwise benefit or educate a group or individual can also be used against such groups or individuals. Industrial espionage among highly competitive businesses often requires that extensive security measures be put into place. And, those who wish to exercise their personal freedom, outside of the oppressive nature of governments, may also wish to encrypt certain information to avoid suffering the penalties of going against the wishes of those who attempt to control.

Still, the methods of data encryption and decryption are relatively straightforward, and easily mastered. I[1] have been doing data encryption since my college days, when I used an encryption algorithm to store game programs and system information files on the university mini-computer, safe from 'prying eyes'. These were files that raised eyebrows amongst those who did not approve of such things, but were harmless [we were always careful NOT to run our games while people were trying to get work done on the machine]. I was occasionally asked what this "rather large file" contained, and I once demonstrated the program that accessed it, but you needed a password to get to 'certain files' nonetheless. And, some files needed a separate encryption program to decipher them.

Methods of Encrypting Data

Traditionally, several methods can be used to encrypt data streams, all of which can easily be implemented through software but not so easily decrypted when either the original or its encrypted data stream is unavailable. (When both source and encrypted data are available, code-breaking becomes much simpler, though it is not necessarily easy.) The best encryption methods have little effect on system performance, and may contain other benefits (such as data compression) built in. The well-known 'PKZIP®' utility offers both compression AND data encryption in this manner. Also, DBMS packages have often included some kind of encryption scheme so that a standard 'file copy' cannot be used to read sensitive information that might otherwise require some kind of password to access. They also need 'high performance' methods to encode and decode the data.

Ways of encrypting data


1. The 'translation table' meets this need very well. Each 'chunk' of data (usually 1 byte) is used as an offset within a 'translation table', and the resulting 'translated' value from within the table is then written into the output stream. The encryption and decryption programs would each use a table that translates to and from the encrypted data. However, such a method is relatively straightforward for code breakers to decipher - such code methods have been used for years, even before the advent of the computer. Still, for general "unreadability" of encoded data, without adverse effects on performance, the 'translation table' method lends itself well (a minimal C sketch follows item 1.b below).

a. A modification to the 'translation table' uses 2 or more tables, based on the position of the bytes within the data stream, or on the data stream itself. Decoding becomes more complex, since you have to reverse the same process reliably. An example of this method might use translation table 'A' on all of the 'even' bytes, and translation table 'B' on all of the 'odd' bytes. Unless a potential code breaker knows that there are exactly 2 tables, even with both source and encrypted data available the deciphering process is relatively difficult.

b. Similar to using a translation table, 'data repositioning' lends itself to use by a computer, but takes considerably more time to accomplish. A buffer of data is read from the input, then the order of the bytes (or other 'chunk' size) is rearranged, and written 'out of order'. The decryption program then reads this back in, and puts them back 'in order'. Often such a method is best used in combination with one or more of the other encryption methods mentioned here, making it even more difficult for code breakers to determine how to decipher your encrypted data. The most common examples are anagrams. Some anagrams are easier than others to decipher, but a well written anagram is a brain teaser nonetheless, especially if it's intentionally misleading.
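To make the table idea concrete, here is a minimal C sketch of method 1 and the two-table variant in 1.a. The key-derived permutation below is a deliberately simple stand-in (multiplying by 5, which is coprime to 256, guarantees a bijection), not a serious scheme, and the function names are our own:

#include <stddef.h>

/* build a toy translation table and its inverse from a one-byte key */
void build_tables(unsigned char key, unsigned char enc[256], unsigned char dec[256])
{
  int i;
  for(i = 0; i < 256; i++)
    enc[i] = (unsigned char)((i * 5 + key) & 0xFF);  /* a permutation of 0..255 */
  for(i = 0; i < 256; i++)
    dec[enc[i]] = (unsigned char)i;                  /* if a maps to b, b maps back to a */
}

/* method 1: run every byte through one table (pass 'dec' to reverse) */
void translate(unsigned char *buf, size_t len, const unsigned char table[256])
{
  size_t i;
  for(i = 0; i < len; i++)
    buf[i] = table[buf[i]];
}

/* variant 1.a: alternate two tables on even/odd byte positions */
void translate2(unsigned char *buf, size_t len,
                const unsigned char tA[256], const unsigned char tB[256])
{
  size_t i;
  for(i = 0; i < len; i++)
    buf[i] = (i & 1) ? tB[buf[i]] : tA[buf[i]];
}

To decrypt the two-table variant, build the two inverse tables and call translate2 with them in the same even/odd order.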

2. My favorite methods, however, involve something that only computers can do: word/byte rotation and XOR bit masking. If you rotate the words or bytes within a data stream, using multiple and variable directions and durations of rotation, in an easily reproducible pattern, you can quickly encode a stream of data with a method that is nearly impossible to break. In some cases, you may want to detect whether data has been tampered with, and encrypt some kind of 'checksum' into the data stream itself. This is useful not only for authorization codes but for programs themselves (see the sketch after item 2.a below).

a. A virus that infects such a 'protected' program would no doubt fail to preserve the encryption algorithm and authorization/checksum signature. A cyclic redundancy check (CRC) is one commonly used checksum method. It uses bit rotation and an XOR mask to generate a 16-bit or 32-bit value for a data stream, such that one missing bit or 2 interchanged bits are more or less guaranteed to cause a 'checksum error'. The method is well documented and standard. But a deviation from the standard CRC method might be useful for detecting a problem in an encrypted data stream, or within a program file that checks itself for viruses.
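Here is a rough C sketch of the rotation-and-XOR idea, plus the checksum from 2.a. The rotation schedule and mask are arbitrary choices for illustration, and the CRC shown is the standard CRC-16/CCITT shift-and-XOR loop rather than any particular deviation:

#include <stddef.h>

static unsigned char rotl8(unsigned char v, unsigned n)  /* rotate left, n in 1..7 */
{
  return (unsigned char)(((v << n) | (v >> (8 - n))) & 0xFF);
}
static unsigned char rotr8(unsigned char v, unsigned n)  /* rotate right, n in 1..7 */
{
  return (unsigned char)(((v >> n) | (v << (8 - n))) & 0xFF);
}

/* method 2: rotate each byte by a position-dependent amount, then XOR a mask */
void rot_xor_encrypt(unsigned char *buf, size_t len, unsigned char mask)
{
  size_t i;
  for(i = 0; i < len; i++)
    buf[i] = rotl8(buf[i], (unsigned)(i % 7) + 1) ^ mask;
}
void rot_xor_decrypt(unsigned char *buf, size_t len, unsigned char mask)
{
  size_t i;
  for(i = 0; i < len; i++)
    buf[i] = rotr8(buf[i] ^ mask, (unsigned)(i % 7) + 1);
}

/* 2.a: a standard CRC-16/CCITT checksum, built from shifts and XORs */
unsigned short crc16_ccitt(const unsigned char *data, size_t len)
{
  unsigned short crc = 0xFFFF;
  while(len--)
  {
    int i;
    crc ^= (unsigned short)(*data++) << 8;
    for(i = 0; i < 8; i++)
      crc = (crc & 0x8000) ? (unsigned short)((crc << 1) ^ 0x1021)
                           : (unsigned short)(crc << 1);
  }
  return crc;
}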

3. Key-Based Encryption Algorithms:

a. One very important feature of a good encryption scheme is the ability to specify a 'key' or 'password' of some kind, and have the encryption method alter itself such that each 'key' or 'password' produces a different encrypted output, which requires a unique 'key' or 'password' to decrypt. This can either be a 'symmetrical' key (both encrypt and decrypt use the same key) or 'asymmetrical' (encrypt and decrypt keys are different). The popular 'PGP' public key encryption, and the 'RSA' encryption that it's based on, use an 'asymmetrical' key. The encryption key, the 'public key', is significantly different from the decryption key, the 'private key', such that attempting to derive the private key from the public key involves many hours of computing time, making it impractical at best.

b. In nearly all cases, if an operation is performed on 'a', resulting in 'b', you can perform an equivalent operation on 'b' to get back 'a'. In other words, a usable encryption step must be invertible.

c. In the case of the RSA encryption algorithm, it uses very large prime numbers to generate the public key and the private key. Although it would be possible to factor out the public key to get the private key (a trivial matter once the 2 prime factors are known), the numbers are so large as to make it very impractical to do so.

d. What PGP does (and most other RSA-based encryption schemes do) is encrypt a symmetrical key using the public key; the remainder of the data is then encrypted with a faster algorithm using the symmetrical key. The symmetrical key itself is randomly generated, so that the only way to get it would be by using the private key to decrypt the RSA-encrypted symmetrical key.

e. Example: Suppose you want to encrypt data (let's say this page) with a key of 12345. Using your public key, you RSA-encrypt the 12345, and put that at the front of the data stream (possibly followed by a marker or preceded by a data length to distinguish it from the rest of the data). THEN, you follow the 'encrypted key' data with the encrypted page text, encrypted using your favorite method and the key '12345'. Upon receipt, the decrypt program looks for (and finds) the encrypted key, uses the 'private key' to decrypt it, and gets back the '12345'. It then locates the beginning of the encrypted data stream, and applies the key '12345' to decrypt the data. The result: a very well protected data stream that is reliably and efficiently encrypted, transmitted, and decrypted. [2]
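To show just the shape of that exchange, here is a self-contained C toy. Every cryptographic piece is a stand-in of our own: toy_asym() is a single-byte XOR standing in for RSA (it is neither asymmetric nor secure), and toy_stream() is an XOR stream standing in for the faster symmetric cipher. Only the envelope layout -- encrypted session key up front, payload after -- reflects the scheme described above:

#include <stdio.h>
#include <string.h>

static void toy_asym(unsigned char *buf, size_t len, unsigned char key)
{
  size_t i;
  for(i = 0; i < len; i++) buf[i] ^= key;            /* placeholder for RSA */
}
static void toy_stream(unsigned char *buf, size_t len,
                       const unsigned char *key, size_t klen)
{
  size_t i;
  for(i = 0; i < len; i++) buf[i] ^= key[i % klen];  /* placeholder symmetric cipher */
}

int main(void)
{
  unsigned char session_key[4] = { 0x12, 0x34, 0x56, 0x78 };  /* the '12345' of the example */
  unsigned char header[4];                                    /* will hold the encrypted key */
  unsigned char msg[] = "a very well protected data stream";
  unsigned char pub = 0x5A, priv = 0x5A;     /* toy 'key pair' -- identical here */

  /* sender: encrypt the session key with the public key, then the payload with the session key */
  memcpy(header, session_key, sizeof header);
  toy_asym(header, sizeof header, pub);
  toy_stream(msg, sizeof msg - 1, session_key, sizeof session_key);
  /* ...transmit [header | msg]... */

  /* receiver: recover the session key with the private key, then decrypt the payload */
  toy_asym(header, sizeof header, priv);
  toy_stream(msg, sizeof msg - 1, header, sizeof header);
  printf("%s\n", msg);   /* prints the original text */
  return 0;
}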

4. A brand new 'multi-phase' method (invented by ME)

a. I have (somewhat) recently developed and tested an encryption method that is (in my opinion) nearly uncrackable. The reasons why will be pretty obvious when you take a look at the method itself. I shall explain it in prose, primarily to avoid any chance of prosecution by those 'GUMMINT' authorities who think that they oughta be able to snoop on anyone they wish, having a 'back door' to any encryption scheme, etc. Well, if I make the METHOD public, they should have the same chance as ANYONE ELSE for decrypting things that use this method.

i. Using a set of numbers (let's say a 128-bit key, or 256-bit key if you use 64-bit integers), generate a repeatable but highly randomized pseudo-random number sequence (see below for an example of a pseudo-random number generator).

ii. 256 entries at a time, use the random number sequence to generate arrays of "cipher translation tables" as follows:

1. fill an array of integers with 256 random numbers (see below)

2. Sort the numbers using a method (like pointers) that lets you know the original position of the corresponding number

3. Using the original positions of the now-sorted integers, generate a table of randomly sorted numbers between 0 and 255. If you can't figure out how to make this work, you could give up now... but on a kinder note, I've supplied some source below to show how this might be done - generically, of course.

4. Now, generate a specific number of 256-byte tables. Let the random number generator continue "in sequence" for all of these tables, so that each table is different.

5. Next, use a "shotgun technique" to generate "de-crypt" cipher tables. Basically, if a maps to b, then b must map to a. So, b[a[n]] = n, get it? ('n' is a value between 0 and 255). Assign these values in a loop, with a set of 256-byte 'decrypt' tables that correspond to the 256-byte 'encrypt' tables you generated in the preceding step. NOTE: I first tried this on a P5 133MHz machine, and it took 1 second to generate the two 256x256 tables (128KB total). With this method, I inserted an additional randomized 'table order', so that the order in which I created the 256-byte tables was part of a 2nd pseudo-random sequence, fed by 2 additional 16-bit keys.

6. Now that you have the translation tables, the basic cipher works like this: the previous byte's encrypted value is the index of the 256-byte translation table. Alternately, for improved encryption, you can use more than one byte, and either use a 'checksum' or a CRC algorithm to generate the index byte. You can then 'mod' it with the number of tables if you use fewer than 256 256-byte tables. Assuming the table is a 256x256 array, it would look like this:
crypto1 = a[crypto0][value] where 'crypto1' is the encrypted byte, and 'crypto0' is the previous byte's encrypted value (or a function of several previous values). Naturally, the 1st byte will need a "seed", which must be known. This may increase the total cipher size by an additional 8 bits if you use 256x256 tables. Or, you can use the key you generated the random list with, perhaps taking the CRC of it, or using it as a "lead in" encrypted byte stream. Incidentally, I have tested this method using 16 'preceding' bytes to generate the table index, starting with the 128-bit key as the initial seed of '16 previous bytes'. I was then able to encrypt about 100kbytes per second with this algorithm, after the initial time delay in creating the table.

7. On the decrypt side, you do the same thing. Just make sure you use 'encrypted' values as your table index both times. Or use 'decrypted' values if you'd rather. They must, of course, match. (A minimal C sketch of steps 6 and 7 follows.)
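A minimal sketch of steps 6 and 7 in C, assuming the 256x256 tables were built as described so that dec[p][enc[p][v]] == v for every p and v, and that both sides know the seed byte:

#include <stddef.h>

/* step 6: each byte is translated by the table selected by the previous ENCRYPTED byte */
void chain_encrypt(unsigned char *buf, size_t len,
                   unsigned char enc[256][256], unsigned char seed)
{
  size_t i;
  unsigned char prev = seed;
  for(i = 0; i < len; i++)
  {
    buf[i] = enc[prev][buf[i]];   /* crypto1 = a[crypto0][value] */
    prev = buf[i];                /* chain on the encrypted value */
  }
}

/* step 7: the same walk, but through the inverse tables */
void chain_decrypt(unsigned char *buf, size_t len,
                   unsigned char dec[256][256], unsigned char seed)
{
  size_t i;
  unsigned char prev = seed;
  for(i = 0; i < len; i++)
  {
    unsigned char c = buf[i];     /* keep the encrypted byte for the chain */
    buf[i] = dec[prev][c];
    prev = c;                     /* index with encrypted values both times */
  }
}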

5. However, if you're at a loss for a random sequence, consider a FIBONACCI sequence, using 2 DWORDs (like from your encryption key) as "seed" numbers, and possibly a 3rd DWORD as an 'XOR' mask. An algorithm for generating a random sequence of numbers, not necessarily connected with encrypting data, might look as follows:

  unsigned long dw1, dw2, dw3, dwMask;
int i1;
unsigned long aRandom[256];
dw1 = {seed #1};
dw2 = {seed #2};
dwMask = {seed #3};
// this gives you 3 32-bit "seeds", or 96 bits total
  for(i1 = 0; i1 < 256; i1++)
{
dw3 = (dw1 + dw2) ^ dwMask;
aRandom[i1] = dw3;
dw1 = dw2;
dw2 = dw3;
}

If you wanted to generate a list of random sequence numbers, let's say between zero and the total number of random numbers in the list, you could try something like THIS:
int __cdecl MySortProc(const void *p1, const void *p2)
{
  unsigned long **pp1 = (unsigned long **)p1;
  unsigned long **pp2 = (unsigned long **)p2;
  if(**pp1 < **pp2)        /* compare the values the pointers point to */
    return(-1);
  else if(**pp1 > **pp2)
    return(1);
  return(0);
}
...
  int i1;
  unsigned long *apRandom[256];
  unsigned long aRandom[256];  // same array as before, in this case
  int aResult[256];  // results go here
  for(i1 = 0; i1 < 256; i1++)
  {
    apRandom[i1] = aRandom + i1;
  }

// now sort it
  qsort(apRandom, 256, sizeof(*apRandom), MySortProc);
// final step - offsets for pointers are placed into output array
  for(i1 = 0; i1 < 256; i1++)
  {
    aResult[i1] = (int)(apRandom[i1] - aRandom);
  }
...

The result in 'aResult' should be a randomly sorted (but unique) array of integers with values between 0 and 255, inclusive. Such an array could be useful, for example, as a byte-for-byte translation table, one that could easily and reliably be reproduced based solely upon a short key (in this case, the random number generator seed); however, in the spirit of the 'GUTLESS DISCLAIMER' (below), such a table could also have other uses, perhaps as a random character or object positioner for a game program, or as a letter scrambler for an anagram generator.

GUTLESS DISCLAIMER: The sample code above does not in and of itself constitute an encryption algorithm, or necessarily represent a component of one. It is provided solely for the purpose of explaining some of the more obscure concepts discussed in prose within this document. Any other use is neither proscribed nor encouraged by the author of this document, S.F.T. Inc., or any individual or organization that is even remotely connected with this web site.


[1] The author is an employee of Cisco, and the preceding extract is taken from his recollections.