NASA offers $400,000 prize for super space glove

If you can build a high-tech glove that can move easily and operate effectively in the vacuum of space, NASA may have $400,000 for your effort. That's the amount of money up for grabs in the 2009 Astronaut Glove Challenge, set for Nov. 19 at the Astronaut Hall of Fame in Titusville, Fla. NASA said the competition will test gloves from at least two contestants, measuring the gloves' dexterity and strength during operation in a glove box that simulates the vacuum of space. According to the competition Web site, the challenge will be conducted by Volanz Aerospace in a format that brings all competitors to a single location for a "head to head" competition to determine the winning team(s). Each team will be required to pass a series of minimum performance requirements covering the glove's interface with the test box, flexibility, dexterity and pressurization.

The team(s) that earns the highest score will be the winner. Other requirements include: the weight of the outer, or thermal micrometeoroid garment (TMG), layer of the glove must not exceed 200 grams; and the TMG must be able to withstand a temperature range from -120 degrees Celsius (-184 F) to +113 degrees Celsius (235 F). Performance tests include range of motion and the ability of the operator to push and pull items as well as manipulate them. From the Web site: "For this test, conducted in the glove box, the Competitor will insert the full Glove, consisting of the TMG layer, the outer unpressurized glove layer, and the unpowered bladder and bladder-restraint portion of the Glove, into the Glove Box." The Competitor will then perform 30 minutes of hand exercises (e.g., pinching and gripping) and other manipulation dexterity tests and tasks that will be scored based on performance. The glove challenge is but one of NASA's Centennial Challenges, which offer top-dollar rewards for a variety of innovative technologies. For example, NASA recently awarded $1.65 million in prize money to a pair of aerospace companies that successfully simulated landing a spacecraft on the moon and lifting off again.

NASA gave a $1 million first prize to Masten Space Systems and a $500,000 second prize to Armadillo Aerospace for successfully completing the Northrop Grumman Lunar Lander Challenge. NASA also recently awarded a $900,000 prize in its Power Beaming and Tether Challenge, which aims to develop future solar power satellites and a futuristic project known as the Space Elevator. Space elevators are, in a nutshell, stationary tethers rotating with the Earth, held up by a weight at their end and serving as a track on which electric vehicles called "climbers" can travel up and down carrying about 10 tons of payload, according to The Spaceward Foundation, which is working with NASA on these challenges.

Cheap SANs boast high-end features

This is a good time to be buying a midrange storage-area network. Cost per gigabyte is dropping, throughput per dollar is increasing, and affordable systems are delivering sophisticated features that used to be reserved for the high end of the market. Storage systems typically consist of three logical parts that may be in the same box or in different ones: a chassis that holds disks, a physical controller that interfaces the disks with the storage protocol, and a software management system.

There are plenty of options: the chassis may support 12 to 48 SATA, SAS, Fibre Channel or SCSI disks; the controllers include hardware RAID at many levels; and vendors may support Fibre Channel, Ethernet or both. The high-end features that used to be the domain of expensive Fibre Channel systems are now available in the midrange systems we tested. We looked at four systems that represent a cross-section of the midrange SAN market today:

• Compellent Storage Center 4.0, the highest-priced system of the group at just over $68,000. Compellent supports both Fibre Channel and Ethernet.

• Dell/EqualLogic PS4000, an iSCSI-only product at $17,000.

• HP StorageWorks 2000sa G2 Modular Smart Array, a Fibre Channel-only product at $13,000.

• And a build-it-yourself combination of Promise Technology vTrak E610f with DataCore's SANmelody 3.0 storage software, a Fibre Channel and Ethernet system that runs less than $10,000.

You can expect some things from virtually any SAN system these days, including easy setup and quick integration into your existing data center. In fact, the Promise/DataCore system, the least-expensive product we tested, not only had all the features of higher-end products, it includes integration with VMware and some other features that the more expensive products have on their road maps but not in shipping products.

So why buy the more expensive products? Peace of mind, primarily. For example, the Promise/DataCore system has no integrated support or warranty - you buy the system, some disks, a server to run the DataCore software on, and the assorted infrastructure parts (switch, cabling), and put it together yourself. If you're not sure what you're doing, it's easy to get SATA drives that really shouldn't be used in a RAID array, for instance. If you can't get it to work as expected, each of the six or more vendors involved is likely to point fingers at the others. And optimizing performance is not necessarily a simple thing - buying a complete system gives you components designed to work together.

In addition, each vendor offers a wide range of configurations that can produce systems ranging from basic and inexpensive to very high-performance and feature-rich. The systems we tested are not direct competitors; they are aimed at different parts of the storage market, and this test demonstrates the wide variety of functionality and performance available. For example, if you're looking for a very low-cost iSCSI system, an EqualLogic system can be purchased with eight drives for around $10,000, while a high-end EqualLogic system can run $100,000 or more; the other vendors offer a similarly wide array of options across the price/performance spectrum. The rich feature sets are not simply about storage redundancy or high performance - features such as automatic snapshots and synchronous or asynchronous replication can replace add-on (and expensive) functions in other products, such as VMware or Microsoft's Hyper-V virtualization software.
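The reason snapshots and clones are near-instant on these systems is copy-on-write: rather than duplicating every block, a snapshot records references to existing blocks and copies data only when one side overwrites it. A minimal toy sketch of the idea (this is an illustration of the general technique, not any of these vendors' implementations):

```python
# Toy copy-on-write volume: a snapshot shares block references with its
# parent, and a write touches only the writing volume's own block map.
class Volume:
    def __init__(self, blocks=None):
        # block number -> data; a shared source map is never mutated in place
        self.blocks = dict(blocks or {})

    def write(self, block_no, data):
        self.blocks[block_no] = data  # only this volume sees the new data

    def read(self, block_no):
        return self.blocks.get(block_no)

    def snapshot(self):
        # Near-instant "clone": copies block references, not the data itself
        return Volume(self.blocks)

base = Volume()
base.write(0, "boot sector")
base.write(1, "vm image v1")

clone = base.snapshot()        # instant, no data copied
clone.write(1, "vm image v2")  # diverges without touching the original

print(base.read(1))   # vm image v1
print(clone.read(1))  # vm image v2
```

This is why cloning a multi-gigabyte VM volume can take seconds: the expensive copying is deferred to individual block overwrites, which are spread over the clone's lifetime.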

Similarly, if you're using VMware for prototyping, testing or provisioning large numbers of servers for internal or external users, the ability to clone a VM volume and mount the clone as a new VM in a matter of seconds makes provisioning a snap. With these midrange SANs, you can easily replicate data from working VMs to a backup location and quickly get the backup site running if a disaster occurs, without a per-server license fee. Choosing a storage system is not a simple matter. Requirements have a way of evolving, not only in terms of the amount of storage required, but in terms of additional features for adding more performance, disaster recovery or high-availability options to existing applications, or expanding from a test bed to an enterprise-wide system. Thus, it's important to look not only at the requirements you have now, but at the ability of the system to grow with your needs.

Harbaugh is a freelance reviewer and IT consultant in Redding, Calif. He can be reached at logan@lharba.com. He has been working in IT for almost 20 years and has written two books on networking, as well as articles for most of the major computer publications.

How a Botnet Gets Its Name

There is a new kid in town in the world of botnets - isn't there always? A heavyweight spamming botnet known as Festi has only been tracked by researchers with MessageLabs Intelligence since August, but is already responsible for approximately 5 percent of all global spam (around 2.5 billion spam e-mails per day), according to Paul Wood, senior analyst with MessageLabs, which keeps tabs on spam and botnet activity. When a botnet like Festi pops onto the radar screen of security researchers, it not only poses the question of what it is doing and how much damage it can cause; there is also the issue of what to call it. For all of their prevalence and power online, when it comes to naming botnets, there is no real system in place.

Wood explained Festi's history. "The name came from Microsoft; they identified the malware behind it and gave it the catchiest name," said Wood. "Usually, a number of companies will identify the botnet at the same time and give it a name based on the botnet's characteristics. Its original name was backdoor.winnt/festi.a or backdoor.trojan. Usually the name comes from wording found within the actual software itself that is used in some way. Backdoor droppers are common, and that wouldn't stick; it would be too generic. This one may have been related to a word like festival."

A common practice so far has been to name a botnet after the malware associated with it, a practice that has some drawbacks. Because the security industry lacks a uniform way to title botnets, the result is sometimes a long list of names for the same botnet, used by different antivirus vendors, that can be confusing to customers. As it stands now, the infamous Conficker is also known as Downup, Downadup and Kido. The Srizbi botnet is also called Cbeplay and Exchanger. Kracken is also the botnet Bobax.

Why botnets are called what they are called is up to the individual researchers who first identified them. "A lot of the time it depends on the first time we see the bot in action and what it does," according to Andre DiMino, director of the Shadowserver Foundation, a volunteer group of cybercrime busters who, in their free time, are dedicated to finding and stopping malicious activity such as botnets. For instance, Gumblar, a large botnet that made news earlier this year (and is possibly perking up again), first hit the gumblar.cn domain, said DiMino. Another, known as Avalanche, was deemed so because of what DiMino described as a preponderance of domain names being used by the botnet.

The naming dilemma can be a difficult one to tackle, according to Vincent Weafer, vice president of Symantec's security response division. Weafer whipped off a few botnet names that have made headlines in recent years and did his best to recall how they got their titles. Among the more notable, he said, is Conficker, which is thought to be a combination of the English word configure and the German word ficker, which is obscene. Kracken is named after a legendary sea monster. Over the years, naming for malware has had a few ground rules. "Don't name anything after the author," he said. "That was most important back when viruses were written for fame."

The Storm botnet was named after a famous European storm and the associated spam that was going around related to it. And MegaD, a large spambot, got its name because it is known for spam that pushes Viagra and various male enhancement herbal remedies. "You can guess what the D stands for after Mega," he said. Gunter Ollmann, vice president of research with security firm Damballa, believes it is time for a systematic approach to naming botnets that vendors can agree upon. Because botnets morph and change so frequently, he said, they rarely continue to have a meaningful association with the original malware sample that prompted researchers to name them in the first place. "Botmasters don't restrict themselves to a single piece of malware," said Ollmann. "They use multiple tools to generate multiple families of malware. To call a particular botnet after one piece of malware is naive and doesn't really encompass what the actual threat is." Ollmann also adds that the vast majority of malware has no real humanized name and is seen simply as digits, which makes naming impossible.

The result is a confusing landscape for enterprise customers who may be trying to clean up a mess made by a virulent worm, only to find various vendors using different names for the same problem. "There is some work going on among AV vendors to come up with a naming convention for the malware, but this is independent of the botnets," said Ollmann. "This has been going on for several years now. But there has been no visible progress the end user can make use of." The most recent iteration of that discussion focused on how to transport the metadata that describes a particular named threat. Ollmann said Damballa is now using a botnet naming system, with the agreement of customers, that favors a two-part name and works much like the hurricane naming system used by the National Weather Service. The first part of the name comes from a list of pre-agreed-upon names. Once a botnet is identified, the next name on the list is used and crossed off.

That name becomes the one forever associated with the botnet. The second part of the name tracks the most common piece of malware currently associated with the botnet. While botnet masters change their malware on a daily basis, they usually only change their malware family balance every two or three days, said Ollmann, and the second part of the name changes to reflect that fluctuation. "So many of these are appearing, it just becomes a case of assigning a human-readable name," said Ollmann. "It is perhaps ungracious to name them with a hurricane naming system, but it speaks perhaps to the nature of this threat."
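The scheme Ollmann describes can be sketched in a few lines. The name list and malware families below are invented for illustration; they are not Damballa's actual lists:

```python
# Sketch of a hurricane-style, two-part botnet naming scheme:
# a permanent first name drawn from a pre-agreed list, plus a
# second part tracking the currently dominant malware family.
AGREED_NAMES = ["Alberto", "Beryl", "Chris", "Debby"]  # hypothetical list

class BotnetRegistry:
    def __init__(self, names):
        self._unused = list(names)
        self.botnets = {}  # permanent name -> current dominant family

    def identify(self, dominant_family):
        # Take the next name off the list; it never returns to the pool
        # and stays associated with this botnet forever.
        name = self._unused.pop(0)
        self.botnets[name] = dominant_family
        return f"{name}.{dominant_family}"

    def update_family(self, name, dominant_family):
        # Only the second half changes as the malware mix fluctuates.
        self.botnets[name] = dominant_family
        return f"{name}.{dominant_family}"

reg = BotnetRegistry(AGREED_NAMES)
print(reg.identify("festi"))                   # Alberto.festi
print(reg.update_family("Alberto", "srizbi"))  # Alberto.srizbi
```

The design mirrors the hurricane convention's key property: the stable first name survives even as the thing it labels changes character, which is exactly what malware-based names fail to do.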

Japan plans 3D broadcasts as Sony preps 3D truck

Television broadcasting in 3D took a couple of steps further toward reality on Wednesday, when Japan joined the list of countries planning to launch the new format and Sony said it will soon deliver a truck equipped to broadcast 3D programming on the go. Japan's Sky Perfect Communications plans to launch 3D broadcasting in the middle of this year with two to three programs per month, it said Wednesday. The announcement adds Japan to a list of countries planning to launch 3D broadcasting this year, at about the same time compatible televisions begin appearing on the market.

Earlier this month ESPN, Discovery Channel and Sony said they would launch 3D channels in the U.S. during 2010. British Sky Broadcasting (BSkyB) is also planning to launch a 3D channel for viewers in the U.K. and Ireland, and South Korea's SkyLife is also testing the format. Other broadcasters, such as Australia's Foxtel, are yet to announce firm plans but have expressed interest in 3D programming. Sports, movies and gaming are the genres through which the industry expects 3D will become popular in the living room, so most of the early TV content will be focused on sports and movies. Hollywood is taking care of the movies with titles like "Avatar," which has become the highest-grossing movie of all time with a box office total of US$1.9 billion to date. Movie studio Fox said over 70 percent of those sales came from 3D showings. Programmers might get an early taste of the appetite for 3D sports in June, when the World Cup kicks off in South Africa.

Sony plans to broadcast up to 25 of the games in 3D and will produce and distribute a 3D film of the World Cup after the event. On Wednesday, Sony said its professional broadcasting unit had won a second order for a 3D-capable outside broadcast truck. Earlier this month Sony said it would supply a similar truck to U.S.-based producer All Mobile Video; the latest order is for a truck to be delivered in April to BSkyB for use covering sports and other live events. The first Blu-ray Disc players capable of 3D are also due out this year, and Sony has promised a software upgrade to its PlayStation 3 that will add 3D gaming capability.

Several major consumer electronics companies including Sony and Panasonic have said they plan to launch 3D-compatible television sets during 2010. The sets will be able to handle conventional 2D programming and be switched into 3D mode when suitable programs are transmitted.

Harvard study: Computers don't save hospitals money

A Harvard Medical School study that looked at some of the nation's "most wired" hospital facilities found that computerization of those facilities hasn't saved them any money or improved administrative efficiency. The recently released study evaluated data on 4,000 hospitals in the U.S. over a four-year period and found that the immense cost of installing and running hospital IT systems is greater than any expected cost savings. Moreover, much of the software being written for use in clinics is aimed at administrators, not doctors, nurses and lab workers.

The study comes as the federal government prepares to begin dispensing $19 billion in incentives for the health industry to roll out electronic health records systems. Beginning in 2011, the Health Information Technology for Economic and Clinical Health (HITECH) Act will provide incentive payments of up to $64,000 for each physician who deploys an electronic health records system and uses it effectively. The problem "is mainly that computer systems are built for the accountants and managers and not built to help doctors, nurses and patients," the report's lead author, Dr. David Himmelstein, said in an interview with Computerworld. Himmelstein, an associate professor at Harvard Medical School, said that in its current state, hospital computing might modestly improve the quality of health care processes, but it does not reduce overall administrative costs. "First, you spend $25 million on the system itself and hire anywhere from a couple dozen to a thousand people to run the system," he said. "And for doctors, generally, it increases the time they spend [inputting data]." Himmelstein said that only a handful of hospitals and clinics realized even modest savings and increased efficiency - and those hospitals custom-built their systems after computer system architects conducted months of research. He pointed to Brigham and Women's Hospital in Boston, Latter Day Saints Hospital in Salt Lake City and the Regenstrief Institute in Indianapolis as facilities with some success in deploying efficient e-health systems. Programmers of the successful systems told Himmelstein that they didn't write manuals or offer training, because the systems were intuitive and aimed at clinicians, not administrators. "If you need a manual, then the system doesn't work. If you need training, the system doesn't work," he said.

Even hospitals on the "most wired" list "performed no better than others on quality, costs, or administrative costs," the study found. While many health care experts believe that computerization will improve quality of care, reduce costs and increase administrative efficiency, the Harvard Medical School report notes that no earlier studies closely examined computerization's cost or its effect on a diverse sample of hospitals. Himmelstein and his team of researchers pored over data on computerization at approximately 4,000 hospitals between 2003 and 2007 from the Healthcare Information and Management Systems Society, along with administrative cost data from Medicare Cost Reports and cost and quality data from the 2008 Dartmouth Health Atlas. Himmelstein, who was once the director of clinical computing at Cambridge Hospital in Massachusetts, wrote that the misconception that computerization brings cost savings in hospitals is not new. He pointed to ads by IBM and Lockheed Corp. from the 1960s and 1970s touting computerization as a way to reduce paperwork and improve health care.

In the 1990s, experts also espoused the benefits of computerized patient records, saying they would be adopted quickly and yield huge administrative savings. Today, the federal government's health information technology Web site proclaims that the "broad use of health IT will: improve health care quality; prevent medical errors; reduce health care costs; increase administrative efficiencies; decrease paperwork; and expand access to affordable care." "Unfortunately," Himmelstein's report reads, "these attractive claims rest on scant data. In 2005, one analyst group projected annual savings of $77.8 billion through computerization; another predicted more than $81 billion in savings, as well as a big improvement in health. A 2006 report prepared for the Agency for Healthcare Research and Quality, as well as an exhaustive systematic review, found some evidence for cost and quality benefits of computerization at a few institutions, but little evidence of generalizability. Recent Congressional Budget Office reviews have been equally skeptical." David Brailer, who served as the nation's first health information czar under President George W. Bush, noted in an interview with Computerworld earlier this year that 25% to 35% of the nation's 5,000 hospitals use or are in the process of rolling out computerized order-entry and medical records systems. Brailer, now chairman of Health Evolution Partners, a San Francisco-based investment firm that specializes in funding health care providers, headed the Office of the National Coordinator for Health Information Technology from 2004 until 2006. Implementing e-health records nationwide would cost between $75 billion and $100 billion, Brailer said, adding that individual hospitals "will have to make sizable, potentially multi-hundred-million-dollar budget commitments." Still, he said a fully functioning national electronic health system could reduce U.S. health care costs by $200 billion to $300 billion annually by cutting down on duplicate records, reducing record-keeping errors, avoiding fraudulent claims and better coordinating health care among providers.

Himmelstein called those claims "unsupported." "For 45 years or so, people have been claiming computers are going to save vast amounts of money and that the payoff was just around the corner," he said. "So the first thing we need to do is stop claiming things there's no evidence for. It's based on vaporware and [hasn't been] shown to exist or shown to be true."

ICANN OKs International Domains: The Pros and Cons

ICANN's approval of non-Latin-character domains is undoubtedly a game-changing decision in the history of the World Wide Web. With internationalized domain names scheduled to start popping up in the middle of next year, many people are debating whether this digital support for more distinctly international sites balances against potential security threats and fragmentation of the Internet. Here are a few pros and cons to consider as we move away from the traditional ASCII-based Web.

Pro: World Wide Web Supporting World Wide Language

Let's face it; millions of Internet users speak languages that aren't written using Roman characters.

The transition will begin on November 16, when countries can apply for country codes in their own character sets. "The first countries that participate will not only be providing valuable information on the operation of IDNs in the domain name system, they are also going to help to bring the first of billions more people online - people who never use Roman characters in their daily lives," ICANN CEO and President Rod Beckstrom said in a statement. Allowing Web sites to have domains that use other characters will make Web addresses more recognizable to some and make the Web more accessible to millions of new users.

Con: Country Codes are Only the Beginning

Generic domains such as .com, .org and .net aren't open to international characters yet, but could be in the next couple of years. If ICANN decides to open generic domains without extending rights to existing URL holders, international companies and brands might find themselves purchasing URLs in multiple languages to protect the use of their name, points out PC World Tech Inciter writer David Coursey.

Pro: Country Codes are Only the Beginning

If done properly, opening generic domains to international characters could be a good thing. If international corporations were granted rights to the .com URLs they already possess, it could spell an end to selecting a region before entering a site. For instance, going to intel.com could lead to the English version of the site, while using a Japanese, Russian, or Korean suffix would take you to a version of the site in that language. It would also open doors for smaller Web sites that are just interested in serving a particular language group.

Con: A lesson from 1337 h4ck3r$

Expanding beyond Roman characters also increases the potential for site rip-offs that use homoglyphs, characters with identical or indistinguishable shapes. This already occurs to some degree (for instance, pointing your browser to google.com takes you to a different site than go0gle.com), but different languages might have characters that are identical to characters in other languages.

Con and Pro: No Latin Base Emphasis

Apparently homoglyphs are drawing some attention at ICANN. Languages that use accented Latin characters aren't being supported at this time, the CBC reports. The broadcaster attributes the lack of support to security concerns that accented characters could lead to phishing scams because "internet users might not at first see the difference between, for example, 'google.com' and 'goógle.com.'" This is bad news for French, Spanish, Turkish, and Vietnamese speakers - all four languages use accented characters. But if ICANN is aware of the security concerns that would arise from including these languages, maybe it has some sort of anti-homoglyph trick up its sleeve for other languages (here's looking at you, Cyrillic).

Con: Keyboards and Restrictive Access

Adding support for 100,000 international characters would make traditional keyboards insufficient input devices for accessing the entire Internet. As fellow PC World writer Jacqueline Emigh pointed out, it would be next to impossible to produce a keyboard that could support characters from every language under the sun. Virtual keyboards and language packs could possibly help alleviate the problem for some people, but there wouldn't be an easy fix.
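One way to see why homoglyph domains are more than a theoretical worry: internationalized names are carried in DNS in an ASCII-compatible "punycode" form, so two visually similar addresses are entirely different names on the wire. A quick sketch using Python's built-in idna codec (which implements the older IDNA 2003 rules):

```python
# Visually similar domains map to completely different DNS names.
lookalikes = ["google.com", "go0gle.com", "goógle.com"]

for name in lookalikes:
    # The idna codec applies ToASCII per label; labels containing
    # non-ASCII characters get the "xn--" punycode prefix DNS sees.
    wire_form = name.encode("idna").decode("ascii")
    print(f"{name:12} -> {wire_form}")

# The accented version becomes xn--gogle-1ta.com on the wire -
# unambiguous to DNS, but nearly invisible to a user scanning a URL bar.
```

This is exactly the asymmetry phishers exploit: resolvers and registries see distinct strings, while the human eye sees "google.com" three times.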

ICANN released a video with its announcement, hoping to encapsulate the potential of opening up international character domains.

Malware messes up India's online test for business schools

The move by India's top business schools to take their CAT entrance test online turned embarrassing after malware-infected computers left a number of students unable to take the test. Prometric, a Baltimore, Maryland, testing company hired to conduct the CAT (Common Admission Test), said this week that the testing labs faced technical difficulties mainly due to malware and viruses. It said on the CAT Web site that it has decided to reschedule the tests for the affected students. Over 240,000 candidates registered for the CAT 2009, which was scheduled to run from Nov. 28 to Dec. 7. While the written test was held on a single day in previous years, the online test this year was spread over 10 days, giving candidates the option to choose a date and center for the test.

Prometric was to conduct the tests across labs in 32 cities in the country. But on the first day of the test, computer viruses and malware prevented 47 testing labs from delivering the test to candidates as scheduled. The tests are continuing after the initial disruption. The Indian Institutes of Management (IIMs) are the top management training schools in the country, and some of their alumni occupy key positions in companies both in India and abroad. The IIMs were set up by the government, but the institutes run as autonomous organizations.

The CAT is conducted to select students for the seven IIMs and some other affiliated institutions. The disruption of the test now throws open the possibility of legal action by candidates, said Vijay Mukhi, a Mumbai-based expert on cybersecurity. If there were viruses and malware in the system, candidates can now question their test results in court, he added. It is also not clear what the IIMs have done to ensure data integrity, Mukhi said. Candidates for the CAT faced problems with connectivity as well, as speeds were sluggish, Mukhi said. There could be doubts that the system did not correctly register the entries candidates made in their answers, he added.

Having local servers at the test centers linked up to the main servers would have prevented that problem, he added. India's Minister for Human Resource Development, Kapil Sibal, has criticized the problems with the tests. About 8,000 of the 45,000 candidates for the first three days of the test had difficulties, he told reporters in Delhi this week. The opposition Bharatiya Janata Party described the computer failures as a shame for a country that calls itself an IT superpower.