
Press Release Material (May 4, 2005 update)

Updates
May 4, 2005 "New Internet2 Land Speed Records Set by an International Research Group"
Nov 15, 2004 World's highest performance in internet communication achieved at the SC2004 Bandwidth Challenge
Oct 20, 2004 World's longest native 10 Gigabit Ethernet connection established between Japan and CERN, Switzerland, across research networks in the United States, Canada, the Netherlands, and Japan
May 26, 2004 GRAPE-DR, a project to build the World's Fastest Computer, started



World's highest performance in internet communication achieved at the SC2004 Bandwidth Challenge

Contact: Kei Hiraki
Data Reservoir project
The University of Tokyo
hiraki@is.s.u-tokyo.ac.jp
+81-90-6482-9169

Pittsburgh, PA, USA, November 16, 2004 - Researchers from the University of Tokyo and the Japanese WIDE Project, together with engineers in Japan, Canada, the United States, the Netherlands, and Switzerland, completed the world's longest 10 Gigabit per second circuit ever recorded for the transmission of internet data. The high-bandwidth link connected geographically dispersed servers from the University of Tokyo's Data Reservoir project, stretching from the Supercomputing 2004 research exhibition in Pittsburgh, Pennsylvania to the CERN research center in Geneva, Switzerland, through Tokyo. The length of this fiber optic path is approximately 31,248 km, spanning 17 time zones. The link was used to perform high-speed TCP data transfers that will lead to unprecedented breakthroughs in collaborative physics and engineering experiments among dozens of research institutions worldwide, without distance limitations.

In the experiment, a TCP payload bandwidth of 7.21 Gigabits per second was sustained on a single stream with standard 1500-byte Ethernet packets between two servers connected by the 31,248 km network. This international cooperative project pushes the boundaries of global research and education networking and lays a foundation for a new array of international research opportunities.

Overview

Using 10 Gigabit networking technology that combines OC-192 Packet over SONET technology and 10 Gigabit Wide Area Networking technology, a local area network connecting computers at the University of Tokyo's SC2004 exhibition booth in Pittsburgh, PA was extended to include computers at CERN in Switzerland, connecting through SCinet, Abilene, JGN2 and APAN, Tokyo. The network from APAN to T-LEX was provided by the WIDE project. From T-LEX, the circuit was passed to Seattle using a wavelength donated by Tyco Telecommunications through the IEEAF, and cross-connected through facilities provided by Pacific Northwest Gigapop in Seattle. From Seattle the circuit was then carried across a dedicated lambda on the CA*net 4 network to the Chicago StarLight. At StarLight, the interconnect to SURFnet's Chicago-Amsterdam lambda was made, taking the connection to NetherLight in Amsterdam. Finally, between NetherLight and CERN, SURFnet's Amsterdam-Geneva lambda was used.



The data transfer was achieved between a pair of data-sharing Opteron systems from the Data Reservoir project, one server placed at the SC2004 exhibition booth of the University of Tokyo and another at CERN, each equipped with a Chelsio T110 10 Gigabit Ethernet adapter supporting TCP/IP offload. A transfer rate of 7.21 Gbps was sustained for over 15 minutes using a single TCP stream and standard 1500-byte Ethernet frames over the 31,248-kilometer link. The combined bandwidth-times-distance value, 225,298 terabit-meters per second, is a new world record and is 80% greater than the previous Internet2 Land Speed Record of 124,935 terabit-meters per second. At this transfer rate and distance, a full-length DVD can be transferred anywhere on earth in under 5 seconds.
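The record arithmetic quoted above is easy to check; the short sketch below recomputes the bandwidth-distance product and the improvement over the previous record using only the rate and distance figures stated in the text:

```python
# Recompute the Internet2 Land Speed Record figures quoted in the text.
payload_rate_bps = 7.21e9        # 7.21 Gbps sustained single-stream TCP payload
distance_m = 31_248 * 1000       # 31,248 km fiber path, in meters

# Bandwidth-distance product in terabit-meters per second.
record_tbm_s = payload_rate_bps * distance_m / 1e12
print(round(record_tbm_s))       # -> 225298, matching the quoted world record

# Improvement over the previous record of 124,935 terabit-meters per second.
previous_tbm_s = 124_935
print(round((record_tbm_s / previous_tbm_s - 1) * 100))  # -> 80 (percent)
```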

The Data Reservoir system also achieved a 1.6 Gbps disk-to-disk data transfer with a single quad-Opteron server, with a Chelsio T110 TCP offload engine at each end of the connection. This figure shows that single-box disk servers sustaining 200 Mbytes per second are readily available to a wide range of researchers.



-

The demonstrations were made possible through the support of the following manufacturers, who have generously contributed their equipment and knowledge:
Foundry Networks, Nortel Networks, Cisco Systems, Bussan Networks, NTT Communications, and Net One Systems.

-


List of participants of the experiment

Network used in the experiment (from west to east)

The experiment is supported by:

- Special Coordination Fund for Promoting Science and Technology, Ministry of Education, Culture, Sports, Science and Technology, Japan
- Foundry Networks
- Juniper Networks
- NTT Communications
- NetOne Systems

-

The University of Tokyo's Data Reservoir / GRAPE-DR Project is a research project funded by the Special Coordination Fund for Promoting Science and Technology, MEXT, Japan. The goal of the project is to establish a global data-sharing system for scientific data and to construct a very high-speed computing engine for simulation in astronomy, physics, and bio-science. The GRAPE-DR project will construct a 2 PFLOPS computing engine and a global research infrastructure utilizing multi-10 Gbps networks by 2008. This experiment was performed in cooperation between the University of Tokyo and Fujitsu Computer Technologies, LTD. For more information, visit:
http://data-reservoir.adm.s.u-tokyo.ac.jp/
http://grape-dr.adm.s.u-tokyo.ac.jp/
Contact: Kei Hiraki <hiraki@is.s.u-tokyo.ac.jp>

WIDE, a research consortium working on practical research and development of Internet-related technologies, was launched in 1988. The Project has made a significant contribution to the development of the Internet by collaborating with many other bodies -- including 133 companies and 11 universities -- to carry out research in a wide range of fields, and by operating M.ROOT-SERVERS.NET, one of the DNS root servers, since 1997. The WIDE Project also operates T-LEX (http://www.t-lex.net/) as a steward of the IEEAF Pacific link in Tokyo.
Contact: <press@wide.ad.jp>
Tel: +81-466-49-3618 (c/o KEIO Research Institute at SFC)
Fax: +81-466-49-3622

APAN (Asia-Pacific Advanced Network) is a non-profit international consortium established on 3 June 1997. APAN is designed to be a high-performance network for research and development on advanced next generation applications and services. APAN provides an advanced networking environment for the research and education community in the Asia-Pacific region, and promotes global collaboration.

Its objectives are:
(a) to coordinate and promote network technology developments and advances in network-based applications and services;
(b) to coordinate the development of an advanced networking environment for research and education communities in the Asia-Pacific region; and
(c) to encourage and promote global cooperation to help achieve the above.
http://www.apan.net

JGN2 is an open testbed network environment for research and development. It succeeds JGN (Japan Gigabit Network, a gigabit network for R&D), which operated from April 1999 to March 2004, and has been expanded by the National Institute of Information and Communications Technology (NICT) into a new ultra-high-speed testbed network for R&D collaboration among industry, academia, and government. Its aim is to promote a broad spectrum of research and development, ranging from fundamental core research to advanced experimental testing, in areas including next-generation network technologies and a diverse range of network application technologies.
http://www.jgn.nict.go.jp/e/02-about/02-1/index.html

CANARIE is Canada's advanced Internet organization, a not-for-profit corporation that facilitates the development and use of next-generation research networks and the applications and services that run on them. By promoting collaboration among key sectors and by partnering with similar initiatives around the world, CANARIE stimulates innovation and growth and helps to deliver social, cultural, and economic benefits to all Canadians. CANARIE positions Canada as the global leader in advanced networking, and is supported by its members, project partners, and the Government of Canada. CANARIE developed and operates CA*net 4, Canada's national research and education network. For more information, visit:
http://www.canarie.ca/

CERN is the European Laboratory for Particle Physics, one of the world's most prestigious centers for fundamental research. The laboratory is currently building the Large Hadron Collider. The most ambitious scientific undertaking the world has yet seen, the LHC will collide tiny fragments of matter head on to unravel the fundamental laws of nature. It is due to switch on in 2007 and will be used to answer some of the most fundamental questions of science by some 7,000 scientists from universities and laboratories all around the world. For more information, visit:
http://www.cern.ch/

Pacific Northwest Gigapop is the Northwest's Next Generation Internet applications cooperative, testbed, and point of presence; home to the Pacific Wave international peering exchange; and joint steward with WIDE of the IEEAF trans-Pacific link. PNWGP and Pacific Wave connect high-performance international and federal research networks with universities, research organizations, and leading-edge R&D and new media enterprises throughout Washington, Alaska, Idaho, Montana, Oregon, and the Pacific Rim. For more information, visit:
http://www.pnw-gigapop.net/

SURFnet operates and innovates the national research network, to which over 150 institutions in higher education and research in the Netherlands are connected. The organization is among the leading research network operators in the world. SURFnet is responsible for the realization of GigaPort Next Generation Network, a project of the Dutch government, trade and industry, educational institutions, and research institutes to strengthen the national knowledge infrastructure. Research on optical and IP networking and grids is a prominent part of the project. For more information, visit:
http://www.surfnet.nl/

The Internet Educational Equal Access Foundation (IEEAF) is a non-profit organization whose mission is to obtain donations of telecommunications capacity and equipment and make them available for use by the global research and education community. The IEEAF TransPacific Link is the second 10 Gbps transoceanic link provided by IEEAF through a five year IRU donated by Tyco Telecom; the first, the IEEAF TransAtlantic Link, connects New York and Groningen, The Netherlands, and has been operational since 2002. IEEAF donations currently span 17 time zones. For more information, visit:
http://www.ieeaf.org/

Chelsio Communications is the established leader in 10-Gigabit Ethernet server adapters and protocol acceleration technology. Chelsio's programmable T110 10GbE Protocol Engine is the only 10Gbps Ethernet adapter available today providing full TCP/IP offload and iSCSI acceleration. The T110 has been independently verified as the highest throughput, lowest latency, and most CPU-efficient Ethernet adapter in the industry - all with standard 1500-byte Ethernet frames. The T110 dramatically improves application performance by offloading processor-intensive network and storage protocol stacks from overburdened processors, returning processing cycles to the application to enhance overall system performance. For detailed information, visit:
http://www.chelsio.com



World's longest native 10 Gigabit Ethernet connection established between Japan and CERN, Switzerland, across research networks in the United States, Canada, the Netherlands, and Japan

Overview

October 18, 2004 -- Engineers in Japan, Canada, the United States, the Netherlands, and at CERN in Switzerland completed the world's longest native 10 Gigabit Ethernet circuit for the transmission of data from the Japanese Data Reservoir project to the CERN research center in Geneva, Switzerland. The length of this light path is approximately 18,500 km and spans 17 time zones.

This international cooperative project pushes the boundaries of global research and education networking and lays a foundation for a new array of international research opportunities.

-

Photo: Data Reservoir placed at CERN

Using 10 Gigabit Ethernet WAN PHY technology, a local area network connecting computers at the University of Tokyo was extended to include computers at CERN so that they all appeared to be on the same LAN. The connection from the University of Tokyo to T-LEX was provided by the WIDE project. From T-LEX, the circuit was passed to Seattle using a wavelength donated by Tyco Telecommunications through the IEEAF, and cross connected through facilities provided by Pacific Northwest Gigapop in Seattle. From Seattle the circuit was then carried across a dedicated lambda on the CA*net 4 network to the Chicago StarLight. At StarLight, the interconnect to SURFnet's Chicago-Amsterdam lambda was made, taking the connection to NetherLight in Amsterdam. Finally, between NetherLight and CERN, SURFnet's Amsterdam-Geneva lambda was used.

The network connection involved interconnecting optical lambdas across equipment from a variety of vendors including Foundry Networks, Nortel Networks, and Cisco Systems. This is believed to be the first demonstration of the interoperation of 10 Gigabit Ethernet WAN PHY and optical SONET/SDH equipment from these vendors.

The 10 Gigabit Ethernet connection will be used by the Data Reservoir / GRAPE-DR project of the University of Tokyo to test the optimization and transfer of large TCP data flows across such a long fat pipe. Such transfers are of particular relevance to the ATLAS experiment at CERN's future Large Hadron Collider, to which the University of Tokyo is contributing a data analysis center. The data transfer was achieved between a pair of Data Reservoir data-sharing systems, one placed at the University of Tokyo and one at CERN. An average transfer rate of 7.57 Gbps was achieved for a single TCP stream, using standard Ethernet frames, between two high-end servers equipped with Chelsio T110 10 Gigabit Ethernet adapters.

The Data Reservoir system also achieved a 9 Gbps disk-to-disk data transfer with 9 Xeon servers at each end of the connection. This performance figure had not previously been reported for an intercontinental disk-to-disk transfer.
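The aggregate figure works out to roughly 1 Gbps of disk throughput per machine; a quick check (this assumes an even split across the 9 servers at each end, which the text does not state explicitly):

```python
# Per-server share of the 9 Gbps aggregate disk-to-disk transfer,
# assuming the load was split evenly across the 9 servers at each end.
aggregate_bps = 9e9
servers = 9
per_server_bps = aggregate_bps / servers
print(per_server_bps / 1e9)        # -> 1.0 (Gbps per server)
print(per_server_bps / 8 / 1e6)    # -> 125.0 (MB/s of sustained disk I/O per server)
```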

This networking experiment complements and supports activities underway in the Global Lambda Integrated Facility (GLIF). Most of the participants in this effort are also participants in GLIF.

The demonstrations were made possible through the support of the following manufacturers, who have generously contributed their equipment and knowledge: Foundry Networks, Nortel Networks, Chelsio Communications, Cisco Systems, Bussan Networks, and Net One Systems.

We acknowledge the support of: the European Union project ESTA (IST-2001-33182), CERN OpenLAB, SARA, Global Crossing, Industry Canada, NTT Communications, Special Coordination Fund for Promoting Science and Technology, MEXT, Japan, and ITC of the University of Tokyo.

-

CANARIE is Canada's advanced Internet organization, a not-for-profit corporation that facilitates the development and use of next-generation research networks and the applications and services that run on them. By promoting collaboration among key sectors and by partnering with similar initiatives around the world, CANARIE stimulates innovation and growth and helps to deliver social, cultural, and economic benefits to all Canadians. CANARIE positions Canada as the global leader in advanced networking, and is supported by its members, project partners, and the Government of Canada. CANARIE developed and operates CA*net 4, Canada's national research and education network. For more information, visit:
http://www.canarie.ca/

CERN is the European Laboratory for Particle Physics, one of the world's most prestigious centers for fundamental research. The laboratory is currently building the Large Hadron Collider. The most ambitious scientific undertaking the world has yet seen, the LHC will collide tiny fragments of matter head on to unravel the fundamental laws of nature. It is due to switch on in 2007 and will be used to answer some of the most fundamental questions of science by some 7,000 scientists from universities and laboratories all around the world. For more information, visit:
http://www.cern.ch/

Pacific Northwest Gigapop is the Northwest's Next Generation Internet applications cooperative, testbed, and point of presence; home to the Pacific Wave international peering exchange; and joint steward with WIDE of the IEEAF trans-Pacific link. PNWGP and Pacific Wave connect high-performance international and federal research networks with universities, research organizations, and leading-edge R&D and new media enterprises throughout Washington, Alaska, Idaho, Montana, Oregon, and the Pacific Rim. For more information, visit:
http://www.pnw-gigapop.net/

SURFnet operates and innovates the national research network, to which over 150 institutions in higher education and research in the Netherlands are connected. The organization is among the leading research network operators in the world. SURFnet is responsible for the realization of GigaPort Next Generation Network, a project of the Dutch government, trade and industry, educational institutions, and research institutes to strengthen the national knowledge infrastructure. Research on optical and IP networking and grids is a prominent part of the project. For more information, visit:
http://www.surfnet.nl/

The University of Tokyo's Data Reservoir / GRAPE-DR Project is a research project funded by the Special Coordination Fund for Promoting Science and Technology, MEXT, Japan. The goal of the project is to establish a global data-sharing system for scientific data and to construct a very high-speed computing engine for simulation in astronomy, physics, and bio-science. The GRAPE-DR project will construct a 2 PFLOPS computing engine and a global research infrastructure utilizing multi-10 Gbps networks by 2008. This experiment was performed in cooperation between the University of Tokyo and Fujitsu Computer Technologies, LTD. For more information, visit:
http://data-reservoir.adm.s.u-tokyo.ac.jp/
http://grape-dr.adm.s.u-tokyo.ac.jp/
Contact: Kei Hiraki <hiraki@is.s.u-tokyo.ac.jp>
Tel: +81-3-5841-4039
Fax: +81-3-3818-1073

WIDE, a research consortium working on practical research and development of Internet-related technologies, was launched in 1988. The Project has made a significant contribution to the development of the Internet by collaborating with many other bodies -- including 133 companies and 11 universities -- to carry out research in a wide range of fields, and by operating M.ROOT-SERVERS.NET, one of the DNS root servers, since 1997. The WIDE Project also operates T-LEX (http://www.t-lex.net/) as a steward of the IEEAF Pacific link in Tokyo.
Contact: <>
Tel: +81-466-49-3618 (c/o KEIO Research Institute at SFC)
Fax: +81-466-49-3622

The Internet Educational Equal Access Foundation (IEEAF) is a non-profit organization whose mission is to obtain donations of telecommunications capacity and equipment and make them available for use by the global research and education community. The IEEAF TransPacific Link is the second 10 Gbps transoceanic link provided by IEEAF through a five year IRU donated by Tyco Telecom; the first, the IEEAF TransAtlantic Link, connects New York and Groningen, The Netherlands, and has been operational since 2002. IEEAF donations currently span 17 time zones. For more information, visit:
http://www.ieeaf.org/

GLIF is a consortium of institutions, organizations, consortia, and national research networks that voluntarily share optical networking resources and expertise for the advancement of scientific collaboration and discovery, under the leadership of SURFnet and the University of Amsterdam in the Netherlands. For more information, visit:
http://www.glif.is/




GRAPE-DR, a project to build the World's Fastest Computer, started

Contact: Kei Hiraki
Graduate School of Information Science and Technology,
the University of Tokyo
e-mail: hiraki@is.s.u-tokyo.ac.jp
  Junichiro Makino
Graduate School of Science, the University of Tokyo
e-mail: makino@astron.s.u-tokyo.ac.jp

Key Points

By 2008, we will develop an infrastructure for scientific research with 2 PFLOPS of computation power, 40 Gbps data transfer speed, and rich applications for numerical computation.

Overview

The GRAPE-DR Project is a research project funded by the Special Coordination Fund for Promoting Science and Technology of MEXT, Japan. The project group consists of the University of Tokyo, the National Institute of Information and Communications Technology, NTT Communications, the National Astronomical Observatory of Japan, and RIKEN. The project representative is Professor Kei Hiraki, Graduate School of Information Science and Technology, the University of Tokyo. The goal of the project is to construct a 2 PFLOPS computing engine and a global research infrastructure utilizing multi-10 Gbps networks by 2008.

Among all domestic and overseas ultra-high-speed computer system projects currently underway, the GRAPE-DR project will be the first to break through the PFLOPS wall (Note 1).

GRAPE-DR will reach 2 PFLOPS by integrating 2 million arithmetic processors at high density into fewer than 10 racks (Note 2), with the network and file system implemented by highly integrated IP disk technology (Note 3).

Project's home page: http://grape-dr.adm.s.u-tokyo.ac.jp

(Note 1) Domestic projects: Earth Simulator (40 TFLOPS, 2002, installed), GRAPE-6 (64 TFLOPS, 2001, installed), Grid by a NAREGI cluster computer (100 TFLOPS, 2007).
  Overseas projects: BlueGene (360 TFLOPS, 2005, IBM, USA), ASCI Purple (100 TFLOPS, 2005, Los Alamos National Laboratory), CRAY X1 and others (50 TFLOPS, end of 2005, Oak Ridge National Laboratory), PERCS (1 PFLOPS, 2008, IBM, USA), Hero (1 PFLOPS, 2008, Sun Microsystems, USA), Cascade (1 PFLOPS, 2010, CRAY, USA).
(Note 2) A computing speed of 2 PFLOPS is 50 times that of the Earth Simulator, the world's fastest computer; it carries out 2,000 trillion computations per second.
(Note 3) IP disk technology connects a computer to its disks not over a dedicated bus (such as a SCSI bus or Fibre Channel) but over general IP networks (Ethernet, for example). Data can be shared efficiently by integrating the input/output network and the storage network.
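The speed comparison in Note 2 follows directly from the FLOPS figures quoted in the notes; as a quick check:

```python
# Verify the speed ratio in Note 2 from the quoted FLOPS figures.
grape_dr_flops = 2e15           # 2 PFLOPS target
earth_simulator_flops = 40e12   # 40 TFLOPS (Note 1)

print(grape_dr_flops / earth_simulator_flops)  # -> 50.0 (times faster)
print(grape_dr_flops / 1e12)                   # -> 2000.0 (trillion ops per second)
```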

1. Outline

With the rapid increase in CPU speed, network speed, memory capacity, and storage capacity, clusters of computers connected to each other by high-speed networks have become an indispensable infrastructure for scientific research. As of 2004, Internet backbones for scientific research in both the USA and Europe have bandwidth in excess of 10 Gbps, connecting computers and storage at supercomputer centers. In scientific computation, machines with multi-teraflops performance are now standard, as exemplified by our GRAPE-6 (64 TFLOPS), the Earth Simulator (40 TFLOPS), and BlueGene/L. Effective utilization of such an information infrastructure is important for the future development of science.

GRID technology, which is intended to utilize computer and storage resources distributed across the Internet without regard to location, has been actively studied since the second half of the 1990s and is being introduced to Japan. However, despite high expectations, middleware-centered GRID technology is barely used in actual research fields, except for research on GRID itself. On the other hand, cluster systems, used to obtain high computational power at low cost, are limited in the number of servers they can accommodate, and naive accumulation of ordinary MPUs cannot reach beyond 100 TFLOPS.

The GRAPE-DR project is intended to overcome these problems and construct a new information infrastructure with the world's top performance, utilizing high-speed networks and high-capacity storage. Specifically, the project will implement (1) distributed data sharing (including data transfer), (2) ultrahigh-speed computation with distribution transparency, and (3) distributed ultrahigh-speed database processing for scientific data (retrieval of complex data, data mining, etc.). It will provide information services usable by working scientists without any special knowledge of computer science.

2. Towards 2PFLOPS

Constructing an ultrahigh-speed computer system that exceeds the performance attainable by any single processor inevitably requires a parallel distributed architecture, and in fact all ultrahigh-speed computer system projects in practice adopt such an architecture.

However, a system with over-PFLOPS performance faces four formidable obstacles: (1) physical space, (2) power consumption, (3) the effective performance (implementation efficiency) delivered to a user, and (4) system reliability. For example, implementing such a system with a cluster computer that connects a great number of servers built from ordinary processors, or with a Grid system that connects such clusters in a distributed manner, is very difficult (Note 4). In other words, the system required by scientific research fields must possess (1) compactness, (2) low power consumption, (3) high execution efficiency, and (4) reliability that allows researchers direct access.

The GRAPE-DR system will overcome these difficulties by generalizing the massively parallel computing methods gained while developing the GRAPE series (Note 5), and by decentralization through high integration of processors and IP-compatible storage; the notes below outline the technological basis.

(Note 4) The world's fastest cluster, the ASCI Purple system, has a computing capability of 8 GFLOPS per node. Assuming 1U of space and 200 W of power per node, the processors alone require 312 racks (40U racks) and 2.5 MW of electricity. Even if per-node performance doubles in the next few years, a 1 PFLOPS implementation would still need about 1,560 racks and 12.5 MW of power; implementation is accordingly very difficult, and reliability can be expected to erode.
As for Grid systems, intra-cluster communication delay is a few microseconds and inter-cluster communication delay is a few milliseconds. Compared to the operation time of a single processor, these delays are a few thousand and a few million times longer, respectively, making it difficult to exploit the parallelism of a single program effectively. Consequently, cluster systems -- and Grid systems in particular -- are mainly used for trivially parallel workloads such as genetic algorithms.
(Note 5) The GRAPE series of astronomical simulation computers was invented by Junichiro Makino, a core member of the GRAPE-DR project, and has achieved remarkable results in studies of planet and galaxy formation. The GRAPE series has been awarded the Gordon Bell Prize for the fastest practical computation six times: in 1995, 1996, 1999, 2000, 2001, and 2003. The latest machine, GRAPE-6, attained 64 TFLOPS. Because the connections among arithmetic units were fixed for the n-body problem in GRAPE-1 through 6, the number of applicable applications was limited; GRAPE-DR allows variable connections among arithmetic units, enabling general-purpose use.
(Note 6) Because GRAPE-DR connects processors directly with a pipeline network and passes operands directly between processors, it escapes the memory-access bottleneck; we call this scheme static data flow.
(Note 7) Data Reservoir is a distributed data-sharing system developed by the Data Reservoir project, funded by the Special Coordination Fund for Promoting Science and Technology, MEXT, Japan, and conducted from 2001 to 2004 by Hiraki and Inaba. It twice won awards for disk-to-disk data transfer between Japan and the USA, in 2002 and 2003, in the Bandwidth Challenge at the SC (Supercomputing) conference.
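The cluster-sizing estimate in Note 4 can be reproduced with a few lines of arithmetic (a sketch under the note's own stated assumptions: 8 GFLOPS, 1U, and 200 W per node, 40U racks, and per-node performance doubling for the 1 PFLOPS case):

```python
# Reproduce the sizing estimate of Note 4.
flops_per_node = 8e9      # 8 GFLOPS per node (ASCI Purple class)
units_per_rack = 40       # 40U racks, 1U per node
watts_per_node = 200

# ASCI Purple scale: 100 TFLOPS.
nodes = 100e12 / flops_per_node
print(nodes / units_per_rack)           # -> 312.5 racks
print(nodes * watts_per_node / 1e6)     # -> 2.5 MW

# 1 PFLOPS, assuming per-node performance doubles to 16 GFLOPS.
nodes_pf = 1e15 / (2 * flops_per_node)
print(nodes_pf / units_per_rack)        # -> 1562.5 racks
print(nodes_pf * watts_per_node / 1e6)  # -> 12.5 MW
```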

3. Scientific Computation by GRAPE-DR

The purpose of a high-speed computer system for science is not to post high figures on a benchmark program, but to produce scientifically meaningful computation results. The GRAPE-DR project will develop software and produce such results in seven representative areas: (1) gravitational many-body simulation, (2) grid-less fluid computation by SPH (Smoothed Particle Hydrodynamics), (3) molecular dynamics, (4) boundary element calculation, (5) genome sequence matching, (6) hyper-order mass spectrometry data search, and (7) function search of genome databases.

Figure 1. GRAPE-DR hardware design


Figure 2. Static Data Flow

(C) GRAPE-DR Project