
GridCast: live and behind the scenes of grid computing



The GridCast team blog live from the top grid computing events around the world.



Updated: 2018-04-23T14:24:52.940+02:00

 



SC14: Looking back, but forward thinking

2014-11-22T17:48:32.392+01:00

As I reflect on SC14, I wanted to share some observations that are bugging me. I’d love to hear your thoughts (itbeth2@gmail.com).

1. Several colleagues who worked ten or more years for public universities have left academia for the commercial sector. Sadly, their intellectual contributions will no longer shape student futures. For this reason, I think everyone should volunteer to support STEM in their local schools.

2. With fewer academics funded to attend SC, has anyone noticed an impact to the technical track offerings? How will federal and university employees keep their skills fresh if they can’t attend conferences? How soon will this skill gap impact the global workforce pipeline? Let’s use this to help frame the case for greater federal travel and conferencing support for STEM activities.

4. When they saw my program committee badge, two vendors complained to me that they swiped fewer badges on the floor this year. One said he captured 2,000 in 2013, but only 1,200 this year. He was worried his company wouldn’t send him to SC15. We need to do whatever we can to bring people onto the show floor next year. Open it up to the public, if necessary. Admit local business owners, educators, school groups, and STEM clubs for free.

4. Would the natural competitive element that’s essential (and inherent) to those who function in sales roles erode the platform of collegiality and cross-cultural collaboration that SC is well known for? I’ve worked in sales—I know how they roll.

5. I fear a profit-driven culture will obfuscate the learning process. It’s one thing if a scholar prefers one solution over another, but vendors are paid to support their stuff.

6. There’s an emphasis on entrepreneurship, which is great, but I fear students are ill prepared to market themselves and their innovation. Not everyone is capable of running a business. Maybe it would be helpful to form a Rotarian chapter for STEM entrepreneurs?

7. And last, but never least, we must continue to develop a systemic effort to broaden the participation of underrepresented groups and regions so that innovation is driven by a STEM community that understands the grandest challenges faced by the world we share.

OK, time to pack. Au revoir, ma belle Nouvelle-Orléans!



Pilot allows researchers to log in with local campus credentials to access U.S.-hosted astrophysics data

2014-09-04T18:27:35.466+02:00

Internet2’s InCommon has enhanced its support for international research organizations through a pilot project with the Leonard E. Parker Center for Gravitation, Cosmology and Astrophysics (CGCA) at the University of Wisconsin-Milwaukee, U.S. The pilot will enable astronomers worldwide to use their local campus credentials to log into three UWM-based services. Among them are astronomers from the Laser Interferometer Gravitational-Wave Observatory (LIGO), a project to detect and study gravitational waves from astrophysical objects such as black holes and supernovae. The CGCA plays a key role in LIGO, which was the impetus for creating these collaboration services for gravitational-wave and other astronomers.

By participating in the pilot, CGCA identity management staff are streamlining access to these important tools, while saving the time and effort of creating and maintaining separate (duplicate) user IDs and passwords for hundreds of researchers worldwide. The new approach will enable researchers to gain immediate access to these resources by simply logging in with the home campus-issued credentials they already have in place.

[Map: countries whose national research and education federations participate in eduGAIN are shown in green; countries in the process of joining are in yellow. The map demonstrates the interest in worldwide collaboration.]

InCommon has previously partnered with CGCA and LIGO to provide secure federated access for researchers at U.S. institutions. By joining the international eduGAIN service, InCommon extends this benefit to researchers in other parts of the world. InCommon participants can make this process even easier by supporting the Research & Scholarship (R&S) program, in which a campus automatically releases a small number of user attributes to all services tagged as R&S. This allows researchers to access a service with little or no intervention from their central campus IT department, while still maintaining full control and remaining in full compliance with federal, state and campus privacy requirements.

The global InCommon-eduGAIN pilot involves exporting the metadata about these three CGCA services to the international eduGAIN service, which provides trustworthy exchange of information among national research and education federations, like InCommon. The three services are: the Gravitational Wave Astronomy Community Registry; the Gravitational Wave Astronomy Community Wiki; and the Gravitational Wave Astronomy Community List Server. All three services are tagged for R&S (see the sketch below for what that tagging implies).

“We are delighted to pilot the world-wide sharing of CGCA and LIGO services,” said Klara Jelinkova, chair of the InCommon Steering Committee and Senior Associate Vice President and Chief Information Technology Officer at the University of Chicago. “As a community we are indebted to the University of Wisconsin-Milwaukee, and particularly LIGO’s Scott Koranda and Warren Anderson, as fellow innovation pioneers in our international efforts to support research and education.”

[Image: Computer simulation of two black holes merging into one, and the release of energy in the form of gravitational waves. Photo credit: Bernd Brügmann (Principal Investigator), Max Planck Institute for Astrophysics, Garching, Germany.]
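To make the R&S attribute-release rule concrete, here is a minimal, hypothetical sketch of the decision described above: an identity provider automatically releases a small, fixed bundle of attributes, and only to services carrying the R&S tag. The tag URI and attribute names follow common federation usage, but the function and data are illustrative assumptions, not InCommon's implementation.

```python
# Hypothetical illustration of the Research & Scholarship (R&S) release
# rule described above -- not InCommon's actual implementation.
RS_TAG = "http://refeds.org/category/research-and-scholarship"

# The small, fixed bundle of user attributes released to R&S services.
RS_BUNDLE = {"displayName", "eduPersonPrincipalName", "mail"}

def attributes_to_release(service_tags, user):
    """Release the R&S bundle only if the service is tagged R&S."""
    if RS_TAG not in service_tags:
        return {}  # nothing released automatically
    return {k: v for k, v in user.items() if k in RS_BUNDLE}

# Example: a researcher logging in to an R&S-tagged community wiki.
user = {
    "displayName": "A. Researcher",
    "eduPersonPrincipalName": "aresearcher@example.edu",
    "mail": "aresearcher@example.edu",
    "employeeNumber": "12345",  # never part of the minimal bundle
}
print(attributes_to_release({RS_TAG}, user))
```

The point of the fixed bundle is exactly what the pilot exploits: because the release decision is automatic, a researcher's home campus needs no per-service configuration.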
About Internet2
Internet2® is a member-owned advanced technology community founded by the nation's leading higher education institutions in 1996. Internet2 provides a collaborative environment for U.S. research and education organizations to solve common technology challenges, and to develop innovative solutions in support of their educational, research, and community service missions. Internet2 also operates the nation’s largest and fastest coast-to-coast research and education network, in which the Network Operations Center is powered by Indiana University. Internet2 serves more than 93,000 community anchor institutions, 250 U.S. universities, 70 government agencies, 38 regional and state education networks, 80 leading corporations working with our com[...]



Why broader engagement (with HPC) matters!

2014-07-30T18:13:46.311+02:00

The Supercomputing Conference Broader Engagement (SC-BE) program provides an on-ramp to the high-tech conference experience for minorities, women, and people with disabilities. Read why SC13-BE was especially important to Corey Henderson (University of Wisconsin-Madison, US) and his conference mentor, Richard Barrett (Sandia National Laboratories, US). Via STEM-Trek.







Supercomputing and Distributed Systems Camping School 2014, Now in the Coffee Land!

2014-07-28T14:20:55.115+02:00

In a spectacular natural environment in the heart of the coffee land of Colombia, researchers and students will gather for the 2014 edition of the Supercomputing and Distributed Systems Camp (SC-Camp). Inspired by the idea of bringing together technology and nature, SC-Camp is an initiative of researchers to offer undergraduate and master's students state-of-the-art lectures and practical programming sessions on High Performance and Distributed Computing topics. SC-Camp is a non-profit event, open to every student, including those who lack financial backing. Last year all students who applied in due time received a grant to attend lectures, including meals and accommodation. This year the event will be hosted by BIOS Centro de Bioinformática y Biología Computacional in the outstanding park of Manizales, Colombia.

This year the summer school focuses on bioinformatics and on technology trends in biology and the life sciences. SC-Camp 2014 features six days of scientific sessions and one leisure day with an organized activity (for example, hiking or rafting). Several parallel-programming practical sessions will be held during the lectures.

We welcome applications from undergraduate (preferably final-year) or master's students from all areas of engineering and science with a strong interest in High Performance and Distributed Computing. Due to the advanced content of the lectures, some basic notions of parallel and distributed computing, along with programming skills, are desired. All courses and lectures will be held in English, so a good command of English, both oral and written, is mandatory. The scientific and steering committees will evaluate the application forms based on the applicant's scientific background and motivation letter. This year, for the first time, we expect to accept from 80 up to 120 students. The registration fee includes accommodation and access to all scientific lectures. More information and registration at http://www.sc-camp.org[...]



CHPC: It's time to think wildly different for a change! Let's design an energy-efficient HPC system!

2014-06-30T22:14:46.310+02:00

[Image: Kruger National Park is home to Africa's Big Five: lion, leopard, Cape buffalo, rhino, and elephant.]

South Africa's Centre for High Performance Computing (CHPC) invites everyone to attend its 2014 National Meeting in the Kruger National Park. One of the world's largest wildlife reserves will offer the perfect backdrop for this year's program, which will focus on the development of HPC systems and applications that leave little or no environmental footprint. Additionally, there's an emphasis on workforce development.

One driver behind CHPC's priorities is the Square Kilometre Array (SKA) project: the most powerful telescope ever designed. The iconic endeavor is being installed in the extraordinarily “radio quiet” Karoo region of South Africa in the Northern Cape Province, and will include remote stations in SKA African partner countries such as Botswana, Namibia, Mozambique, Mauritius, Madagascar, Kenya and Ghana. Cooler-running systems and applications are needed for facilities located in remote, warm climates. Additionally, SKA and the projects that will grow from its roots need an indigenous workforce that's prepared for the future.

The conference welcomes the contributions and expectations of policy-makers, multidisciplinary research communities, vendors, and academia through a series of contributed and invited papers, presentations and open discussion forums. Pre-conference tutorials will speak to the heart of HPC. The main session will include plenary talks and numerous parallel breakaway sessions. The content will be of interest to participants from all scientific domains, with an over-arching emphasis on the research priorities of stakeholders.

[Image: the 2013 SADC Forum]

Two administrative forums will convene during the conference: the Industrial HPC Advisory Forum and the Southern African Development Community (SADC) Forum. It'll be exciting to learn how far SADC has come since the 2013 meeting, where they discussed the development of a shared e-infrastructure for SADC member states and their collaborators. With several points of presence in sub-Saharan Africa, they are laying the foundation for a world-class research cyberinfrastructure in a proving ground that holds tremendous opportunity for multinational collaboration, innovation and discovery.

[Image: The South African CHPC team defended their title at the ISC14 student cluster challenge last week in Leipzig, Germany! Go SA!!]

The e-infrastructure will not only provide SADC member states with additional computational resources; the community that uses it will lend diversity to the global HPC workforce. At the International Supercomputing Conference in Leipzig last week, the CHPC team won the student cluster challenge for the second year in a row! In December, the next generation will battle it out for a chance to compete at ISC15. With the unique location of this conference, space could be limited. Register today for a once-in-a-lifetime opportunity! [...]



Putting the ‘super’ in supercomputing at ISC’14

2014-06-24T15:17:56.341+02:00

[Image courtesy Tim Krieger/ISC Events.]

This week, International Science Grid This Week (iSGTW) is attending the International Supercomputing Conference (ISC'14) in Leipzig, Germany. The event features a range of speakers representing a wide variety of research domains. This includes a fascinating keynote talk given on Monday morning by Klaus Schulten, director of the theoretical and computational biophysics group at the University of Illinois at Urbana-Champaign, US, on the topic of large-scale computing in biomedicine and bioengineering.

A number of high-profile prizes were also awarded on Monday. The ISC award for the best research poster went to Truong Vinh Truong Duy of the Japan Advanced Institute of Science and Technology and the University of Tokyo, Japan. He presented work on OpenFFT, an open-source parallel library for computing three-dimensional ‘fast Fourier transforms’ (3-D FFTs).

Meanwhile, both the Partnership for Advanced Computing in Europe (PRACE) and Germany's Gauss Centre for Supercomputing awarded prizes for the best research papers. The PRACE award went to a team from the Technical University of Munich and the Ludwig Maximilian University of Munich, Germany, for its work optimizing software used to simulate seismic activity in realistic three-dimensional Earth models. The GAUSS award went to a team from IBM Research and the Delft University of Technology in the Netherlands for their analysis of the compute, memory, and bandwidth requirements of the key algorithms to be used in the Square Kilometre Array (SKA) radio telescope, which is set to begin the first phase of its construction in 2018.

Another source of competition at the event is the announcement of the new TOP500 list of the world's fastest supercomputers. The new list held little in the way of surprises, with China's Tianhe-2 remaining the fastest supercomputer in the world by a significant margin. Titan at Oak Ridge National Laboratory in Tennessee, US, and Sequoia at Lawrence Livermore National Laboratory in California, US, remain the second and third fastest systems in the world. The Swiss National Supercomputing Centre's Piz Daint is once again Europe's fastest supercomputer and is also the most energy efficient in the top ten. Perhaps the most interesting aspect of Monday's announcement, however, is that for the second consecutive list, the overall growth rate of all the systems is at a historical low.

Read more in our full feature article in iSGTW next week.[...]
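For readers unfamiliar with the poster topic above: a 3-D FFT decomposes a volume of data into its frequency components, and the hard part at scale is distributing that work across many nodes, which is what OpenFFT addresses. The sketch below uses NumPy rather than OpenFFT, purely to show what the underlying transform computes on a single machine.

```python
# A 3-D fast Fourier transform on a small volume, using NumPy (not
# OpenFFT); this shows the transform itself, not the parallel
# decomposition that libraries like OpenFFT provide.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.standard_normal((32, 32, 32))   # a toy 32^3 data volume

spectrum = np.fft.fftn(volume)               # forward 3-D FFT
roundtrip = np.fft.ifftn(spectrum).real      # inverse recovers the input

print(np.allclose(volume, roundtrip))        # True, up to rounding error
```

At scale the expensive part is not these calls but reorganising the volume across thousands of MPI ranks between the one-dimensional transform stages; that communication pattern is what a parallel FFT library optimises.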



EGI Federated Cloud launched at community event in Helsinki

2014-05-21T16:20:54.046+02:00

[Image: The EGI Community Forum 2014 was hosted at the University of Helsinki.]

This week, iSGTW is attending the European Grid Infrastructure (EGI) Community Forum 2014 in Helsinki, Finland. So far, the event has seen the launch of the EGI Federated Cloud, as well as a range of exciting presentations about how grid and other e-infrastructures are being used to advance excellent science in Europe.

The EGI Federated Cloud has been created to support development and innovation within the European Research Area and was designed in collaboration with a wide range of research communities from across the continent. Built on the experience of supporting scientists’ IT needs for over ten years, the EGI Federated Cloud provides researchers with a flexible, scalable, standards-based cloud infrastructure.

“The Federated Cloud is the next step in evolution for EGI,” says EGI.eu managing director Yannick Legré. “We have to support the researchers and understand their needs so we can engage, grow and innovate together.”

At launch, the EGI Federated Cloud offers 5,000 cores and 225 terabytes of storage. However, this is set to increase to 18,000 cores and 6,000 terabytes before the end of this year. Legré also recently revealed to iSGTW that EGI has the goal of ramping this up further to 10,000,000 cores and 1 exabyte (1,000,000 terabytes) of storage by 2020.

This ambitious vision was reiterated during yesterday's launch by David Wallom, chair of EGI's Federated Clouds Task Force. “I am delighted to be able to announce that after so much hard work from everyone involved we now have a research-orientated cloud platform based on open standards that is ready to support every researcher in Europe,” says Wallom. “This is an important milestone for all areas of research in Europe.”

Another highlight of the first days of the EGI Community Forum was a speech given by Thierry van der Pyl, director of ‘excellence in science’ in the European Commission Directorate General for Communications Networks, Content, and Technology (DG CONNECT). “Today, science itself is being transformed: all disciplines are now becoming computational, with more and more data to be processed,” says Van der Pyl. “E-infrastructures are part of the digital revolution transforming our society, re-inventing industry, and changing science.”

During his talk, Van der Pyl also praised EGI for the progress it has made over the last decade: “I would like to congratulate the EGI community for its achievements in building a truly European infrastructure — I think this is a remarkable result.”

Be sure to follow iSGTW on Twitter, Facebook, and Google+ for further updates from the event under the hashtag #EGICF14. We'll also have a full roundup of the event in our 28 May issue.

- Andrew Purcell is editor-in-chief of International Science Grid This Week (iSGTW).[...]
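A quick aside on the scale of those numbers: the snippet below simply makes the growth factors implied by the quoted figures explicit (our own arithmetic, nothing more).

```python
# Growth factors implied by the EGI Federated Cloud figures quoted above.
launch_cores, launch_tb = 5_000, 225
year_end_cores, year_end_tb = 18_000, 6_000
goal_cores, goal_tb = 10_000_000, 1_000_000   # 1 exabyte = 1,000,000 TB

print(f"cores,   launch -> year end: x{year_end_cores / launch_cores:.1f}")  # x3.6
print(f"storage, launch -> year end: x{year_end_tb / launch_tb:.1f}")        # x26.7
print(f"cores,   year end -> 2020:   x{goal_cores / year_end_cores:.0f}")    # x556
print(f"storage, year end -> 2020:   x{goal_tb / year_end_tb:.0f}")          # x167
```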



Apply by June 15 for the SC14 Broader Engagement (BE) program

2014-04-27T00:36:27.157+02:00

[Image: SC13 BE Scholars]
Every November, more than 10,000 of the world’s leading experts in high performance computing (HPC), networking, data storage and analysis convene for the international Supercomputing (SC) conference, the world’s largest meeting of its kind. To help increase the participation of individuals who have been traditionally under-represented in HPC, the SC conference sponsors the Broader Engagement (BE) program. SC14 will be held Nov. 16-21, 2014 in New Orleans. Complete conference information can be found at: http://sc14.supercomputing.org.

Applications are now being accepted for the SC14 BE program, which offers special activities to introduce, engage and support a diverse community in the conference and in HPC. Competitive grants are available to support limited travel to and participation in the SC14 Technical Program. Consideration will be given to applicants from groups that traditionally have been under-represented in HPC, including women, African Americans, Hispanics, Native Americans, Alaska Natives, Pacific Islanders and people with disabilities. We encourage applications from people in all computing-related disciplines—from research, education and industry.

If you don’t need support but would like to participate, please register to attend the BE workshop or volunteer to serve as a mentor in BE’s Mentor-Protégé Program to support newcomers to the conference.

Questions? Contact be@info.supercomputing.org.

To apply, visit: https://submissions.supercomputing.org/



Call for Papers for CARLA 2014 Conference - First HPCLaTAM - CLCAR Joint Conference in Valparaiso, Chile.

2014-04-11T21:21:15.909+02:00

Call for Papers - CARLA 2014: http://www.ccarla.org/
Latin America High Performance Computing Conference - First HPCLATAM-CLCAR Joint Conference
Organized by CCTVal (USM) & NLHPC (UCHILE)
20-22 October 2014, Valparaíso, Chile

IMPORTANT DATES
Paper submission deadline: 15 June 2014
Notification of acceptance: 15 August 2014
Camera-ready papers due: 15 September 2014
Conference dates: 20-22 October 2014

AIMS & SCOPE
Building on the success of the previous editions of the HPCLATAM and CLCAR conferences, in 2014 both major Latin-American HPC workshops will join in CARLA 2014, to be held in Valparaíso, Chile. The conference also includes the Third High Performance Computing School, ECAR 2014 (13-17 October 2014).

The main goal of the CARLA 2014 conference is to provide a regional forum fostering the growth of the HPC community in Latin America through the exchange and dissemination of new ideas, techniques, and research in HPC. The conference will feature invited talks from academia and industry, and short- and full-paper sessions presenting both mature work and new ideas in research and industrial applications. Suggested topics of interest include, but are not restricted to:
- Parallel Algorithms and Architectures
- Parallel Computing Technologies
- High Performance Computing Applications
- Distributed Systems
- GPU Computing: Methods, Libraries and Applications
- Grid and Cloud Computing
- Data Management, Visualization and Software Tools
- Tools and Environments for High Performance System Engineering

INSTRUCTIONS FOR AUTHORS
Authors are invited to submit full and short papers written in English, according to the Springer LNCS style (http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Full papers must have between 10 and 15 pages; short papers must have between 4 and 6 pages. Papers must not have been previously published or submitted elsewhere. Submissions will be handled electronically using the EasyChair system (https://www.easychair.org/conferences/?conf=carla2014). All paper submissions will be carefully reviewed by at least two experts and returned to the author(s). The authors of accepted papers must guarantee that their paper will be presented at the conference. All accepted full papers will be included in the CARLA 2014 proceedings, to be published in the Springer series Communications in Computer and Information Science (CCIS, http://www.springer.com/series/7899). In addition, authors of accepted full papers will be invited to submit extended versions to a Special Issue of Cluster Computing (impact factor: 0.776): http://www.springer.com/computer/communication+networks/journal/10586. All accepted short papers will be published electronically in the “Book of Ongoing Works”. [...]



Cloud for a smart economy and smart society at Cloudscape VI

2014-02-25T16:35:13.002+01:00

Cloudscape VI came to a close just moments ago. The event, which was held at the Microsoft Centre in Brussels, Belgium, featured much discussion of the legal aspects of cloud computing, as well as the potential benefits of cloud computing for both business and scientific research. Bob Jones, head of CERN openlab, also gave an important update on Helix Nebula at the event, and we'll have full coverage of this, as well as the rest of the conference highlights, in next week's issue of iSGTW.

Ken Ducatel, head of software and services, cloud computing at the European Commission, spoke at Cloudscape VI about the variety of business models still evolving around cloud computing. “There are a lot of business models and they're very complex: there's no one-size-fits-all solution,” he says.

Meanwhile, Linda Strick, coordinator of the Cloud for Europe project, spoke at the event about how the different nation states of Europe can make the provision of public services via the cloud particularly difficult. “We need to initiate dialogues between the public sector and industry, and address concerns on data protection, security, legal, and contractual aspects,” says Strick.

In addition, the environmental impacts of cloud computing were discussed at the event. “ICT is enabling energy reduction through optimization, but ICT also consumes a lot of energy,” says Kyriakos Baxevanidis, deputy head of the European Commission's Smart Cities and Sustainability unit within DG CONNECT. “If the cloud were a country, it would rank fifth in the world in terms of energy consumption.”

Other highlights of the event included a brief presentation from Simon Woodman of the University of Newcastle, UK, on e-Science Central, as well as information on the EU Brazil Cloud Connect project, which kicked off earlier this month. You can read more about e-Science Central in our in-depth interview, and we'll have more on EU Brazil Cloud Connect in next week's issue of iSGTW.

Finally, it was announced at the event that the popular Cloudscape conference series will soon be heading away from Brussels for the first time, with the first Cloudscape Brazil conference set to be held later this year. Again, we'll have more on this in next week's issue…[...]



A Latin America Collage in High Performance and Large Scale Computing

2013-08-29T20:32:44.035+02:00

[Image: Speakers and contributions at CLCAR 2013]
CLCAR 2013 in San José, Costa Rica, showed an interesting "collage" of high performance and large-scale computing activity on the continent: a diversity of users, important scientific contributions, and one thing in common, collaboration.

Collaboration among scientific and academic institutions allows new ideas and proposals to develop, and the Europe-Latin America and North America-Latin America interactions are increasingly open.

Some of the contributions in this edition of CLCAR address global interests and aim to resolve technical problems and open questions in related areas. In Costa Rica this year, inspired by the greenest country on the continent, bioinformatics and biodiversity are the common subjects in most of the projects.

Meanwhile, researchers of the Latin American HPC community have met, with the support of RedCLARA and European institutions such as CETA-CIEMAT and the Barcelona Supercomputing Center (BSC), to continue the development of the Advanced Computing Service for Latin America and the Caribbean, SCALAC (from its Spanish/Portuguese acronym). This important meeting is the second face-to-face gathering to work toward a large-scale advanced computing facility, building on the experience of the past projects EELA, EELA-2 and GISELA.

CLCAR 2013 continues until tomorrow. GridCast and iSGTW have been media partners of this important Latin American activity since 2007.



PURA VIDA from Costa Rica: Starting CLCAR 2013 with Tutorials

2013-08-27T01:33:00.251+02:00

[Image: CLCAR 2013 in San José, Costa Rica]
Pura vida is the traditional expression for many "good" things in Costa Rica: acknowledgement, friendship, accord, happiness...

This year, the tutorial sessions of CLCAR 2013 opened the conference with many "good topics": exploiting GPGPU architectures with OpenCL and CUDA, the BioCLCAR tutorial on HPC for the biosciences, and a BOINC with LEGION tutorial.

[Image: Attendees of the CLCAR 2013 tutorials]
Students and instructors from different countries of Latin America joined to share knowledge of technologies, methodologies and collaboration experiences, with participants ranging from students just beginning with large-scale architectures to those taking their first steps toward HPC opportunities.

Tomorrow CLCAR 2013 continues with the second day of the BioCLCAR tutorial and other subjects related to the exploitation of large-scale architectures, e-science and collaboration. Since 2007, CLCAR has been the conference for high performance and scientific computing in Latin America.

The first day of CLCAR was a "PURA VIDA" day of collaboration, friendship and e-science.



A spotlight on the gut at XSEDE'13

2013-07-27T22:18:39.152+02:00

Possibly one of the most literally ‘in-depth’ talks I’ve attended at a computing conference came from Larry Smarr of the J. Craig Venter Institute. He has got involved in biomedical research in the most personal way possible: by presenting his microbiome, or microbial profile, to be scrutinised in minute detail by researchers. The body has 10 times as many microbe cells as human cells; in fact, 99% of the DNA in your genome is in microbial cells, not in human cells. “Topologically we’re a torus”, said Smarr. “Effectively, the bugs are all on the outside of your body, including your guts.”

Smarr’s interest in the microbiome started with increasingly severe but misdiagnosed stomach problems. Smarr was not impressed with the guesswork involved in treating his symptoms. He thought DNA sequencing should give a much more accurate picture of what was going on, eventually leading to a diagnosis of early Crohn's disease, an inflammatory bowel condition. With the cost of sequencing a genome having fallen from 4 billion dollars to 4,000 dollars, financing the research is not so much the issue; it’s picking out the microbial DNA from the human, and then identifying what you have left. Research using the Gordon supercomputer at SDSC still found a significant proportion of DNA that was ‘unknown’. What is clear, though, is that Crohn’s disease has a devastating effect on the gut bacteria: the equivalent of a ‘mass extinction event’, in Smarr’s words. The traditional medical approach of waging war on the gut microbes using heavy-duty drugs was not going to help; the better approach is to switch from full-frontal attack to ‘gardening’. This means using new therapeutic tools to manage the microbiome and encourage ‘good’ bacteria. Diet is also an important factor. “Change the feedstock and you’ll change the shape of the garden,” advised Smarr. As he’s still donating all sorts of raw materials to the research programme on a regular basis, he should know. (You can read more about microbial gardening on the CNN website – even the Economist has got in on the act with its cover article ‘Microbes Maketh the Man’.)

Biosciences Day at XSEDE’13 closed with a lively panel session featuring speakers from a range of high-tech biomedical companies, including at least one lawyer. This is not as strange as it first sounds, because many of the issues affecting biomedical research come down to the ethics of sharing very personal data. “If you surprise a bioethicist with a question, the answer is always no,” said Glen Orter of Dell. Alex Dickinson of Illumina Inc. looked at the role of cloud computing in genomics, including two killer apps, whole-genome assembly and variant calling, as well as single-click sharing of very large data sets. His wish list included cloud bioinformatics featuring openness and collaboration. “Interpretation is now the key, since analysis is no longer the bottleneck,” said Dickinson. This means thinking big and looking at phenotypes (how genes are expressed in the body), not just genotypes. “We want whole genomes and lots of them,” he announced.

Donald Jones of Qualcomm Life talked about connected data, wherever it originates: inside the body, outside the body or from instruments. To bring in a Star Trek reference, this is the ‘tricorder challenge’: to find simple-to-use multimeters for monitoring the body, like the blood glucose meters already used by diabetics. In the future, we’re likely to see increasing numbers of health-related apps.

Darryl Leon from Life Technologies advised us to think holistically and also address the environmental costs of intensive computing – can we have low energy high energy com[...]



Biosciences Day at XSEDE'13

2013-07-27T21:38:29.061+02:00

After the excitement of the XSEDE’13 kick-off next door to Comic Con, and the glamour of a 1940s-themed Chair’s dinner aboard the USS Midway, we moved into Biosciences Day.

Touring the Midway, we squeaked across the blue and white checked lino of the mess deck (hearty hot meals were served 23 hours out of every 24) and ducked into the cramped confines of the sick bay with its three-tier stacked bunk beds. The operating theatre and dental surgery were right next door to the machine shop; to the untrained eye, the equipment in all three looked broadly similar. (Given the gender bias of the crew, I’ll leave the identity of the two most requested elective operations to your imagination. They weren’t on the teeth.) The basic nature of the medical treatment on offer to the crew, however, was a timely reminder of how far medicine has come in the last half century, particularly now that high performance computing has joined the medic’s armoury.

David Toth, a Campus Champion at the University of Mary Washington, talked us through the role XSEDE resources have played in finding potential drug treatments for histoplasmosis (a fungal infection), tuberculosis and HIV. In the days of the USS Midway, crew with a positive HIV test were immediately airlifted to shore, with a future behind a desk rather than at sea ahead of them. Toth’s group screened 4.1 million molecules using XSEDE resources, a task that would have taken 2 years on a single desktop PC. Some drug candidates are now being tested in the wet lab, with several promisingly showing the power to inhibit cell growth. “Supercomputers do a great job of outpacing the biologists,” said Toth. One enterprising feature of the work was that students each screened 10,000 molecules as part of their course, with one student striking lucky with a potential drug candidate.

Looking at biomedical challenges more broadly, Claudiu Farcas from the University of California, San Diego summarised some of the issues posed by data. For a start, there are many different kinds of data, from the genotype, RNA and proteome, on to biomarkers and phenotypes, right up to population-level data, all with their own sets of acronyms. Public repositories are often poorly annotated and are mostly unstructured, as well as governed by limited and complicated data use agreements. “Researchers are encouraged to share but are not always enabled to do so,” warned Farcas. A particularly thorny issue for biomedical data analysis is how to deal with sensitive personally identifiable information (PII). Researchers need to protect the privacy of patients by anonymising their data. It also needs to be cleaned up, compressed and aggregated so it can be analysed efficiently. But how best to do this? Bill Barnett of Indiana University said earlier that biologists really don’t care what they use, as long as it works. Cloud computing can be tempting, but institutional clouds are often still at the early stages of being set up, and commercial sources might have “questionable” privacy protection.

The iDASH (Integrating Data for Analysis, Anonymization and SHaring) portal allows biologists to focus on the biology, not the data. It includes add-ons such as anonymisation of data, annotation, processing pipelines, natural language processing, and the possibility to preferentially select speedier GPGPU resources. According to Farcas, iDASH offers a secure infrastructure plus a platform for development, services and software, including privacy protection.[...]
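A postscript on Toth's numbers: 4.1 million molecules in two desktop-years works out to roughly 15 seconds of compute per molecule, which shows both why the screen is hopeless on one PC and trivial for an XSEDE-scale cluster. The sketch below is our own back-of-the-envelope arithmetic, assuming embarrassingly parallel, linearly scaling docking jobs (a simplification, not a description of the group's actual workflow).

```python
# Back-of-the-envelope on the virtual screening figures quoted above,
# assuming perfectly parallel, linearly scaling jobs.
molecules = 4_100_000
desktop_seconds = 2 * 365 * 24 * 3600      # "2 years on a single PC"

per_molecule = desktop_seconds / molecules
print(f"~{per_molecule:.0f} s of compute per molecule")          # ~15 s

print(f"~{desktop_seconds / 1000 / 3600:.0f} h on 1,000 cores")  # ~18 h

student_share = 10_000 * per_molecule / 3600
print(f"~{student_share:.0f} h for a student's 10,000 molecules")  # ~43 h
```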



XSEDE’13 in San Diego – when super heros met supercomputers

2013-07-24T23:25:01.256+02:00

There aren’t too many conferences where you meet Batman, Dr Who and the Wicked Witch of the West while still checking into the hotel. XSEDE’13 in San Diego this year overlapped with the famous Comic Con event right next door, so caped superheroes marching past in the lobby were apparently part of the deal. Comic Con attracts over 120,000 participants every year; XSEDE slightly fewer, at 700. But this is an ever-rising number year on year, as project leader John Towns was pleased to point out. And I have a suspicion that the categories of ‘comic book nerds’ and ‘IT geeks’ are perhaps not entirely mutually exclusive sets…

XSEDE, the Extreme Science and Engineering Discovery Environment, is a National Science Foundation-supported project that brings together supercomputers, data collections, and computational tools and services to support science and engineering research and education. The annual XSEDE conference focuses on the science, education, outreach, software, and technology used by the XSEDE and other related communities worldwide.

The programme kicked off with a day of well-attended tutorials; with a 200-strong host of students at the event, the training opportunities were much appreciated, as was the opportunity to showcase their work in poster sessions and competitions. Even younger participants were catered for by a morning robotics class, “Ready, Set, Robotical”, which I was more than tempted to join.

Training is always a strong theme at XSEDE, and the challenges of providing online courses in parallel computing were discussed, as well as developing undergraduate and graduate programmes in computational science. Campus champions are the backbone of outreach on a regional basis, and XSEDE is now looking to expand the scheme to subject-specific champions. This emerged as a theme for future collaboration between XSEDE and the European Grid Infrastructure in the Birds of a Feather session. EGI recently set up an EGI Champions scheme, including representatives from fields as disparate as life sciences, environmental sciences and maths. Other areas where EGI and XSEDE expect to work together include federated high-throughput data analysis, federated clouds and user support. One use case already in progress covers HPC-HTC workflows for the computational chemistry community. This was one of the joint use cases that emerged from the call issued at the beginning of the year. So there's lots to watch out for, even now that the caped crusaders have (mostly) left town.



High Performance Computing, e-Science and Scalable Architectures Next to a Volcano

2013-07-21T23:51:16.160+02:00


36 students from Mexico and several Central American countries have begun the 2013 edition of the Supercomputing and Distributed Systems Camp. This year, SC-Camp is being held at the Instituto Tecnológico de Colima, in the beautiful town of Villa de Álvarez in Colima State, México. As in 2011, the camp is taking place near a volcano: the Colima Volcano.

SC-Camp is an initiative of researchers to offer undergraduate and master's students state-of-the-art knowledge of High Performance and Distributed Computing topics. In the 2013 edition, students will cover topics in a seven-day summer school, ranging from large-scale infrastructures (cluster, grid, cloud) to CUDA programming. The instructors come from several countries: Brazil, Colombia, France, Germany, Greece, Mexico and Venezuela.

Past editions of SC-Camp were held in Colombia (2010), Costa Rica (2011) and Venezuela (2012). Every year an interesting special topic is proposed and selected; this year's is the use of reconfigurable architectures for scientific applications.

SC-Camp 2013 is supported by several international partners. GridTalk and iSGTW are media partners of SC-Camp.



iMENTORS goes live!

2013-07-18T17:16:56.808+02:00

[Image: Mapping ICT across Sub-Saharan Africa]

Brussels, 18/07/2013 – iMENTORS goes live and is one step closer to becoming the most comprehensive crowdsourcing map of ICT infrastructures in Sub-Saharan Africa! Users are now able to register, create their profile on the platform, and start sharing their knowledge and data directly on the map.

Co-funded by the European Commission's DG CONNECT under the Seventh Framework Programme (FP7), iMENTORS (www.imentors.eu) is designed to enhance the coherence and effectiveness of international actors involved in e-infrastructure development projects and initiatives in Sub-Saharan Africa. Launched in April 2012 by Stockholm University and Gov2u, iMENTORS is a web-based platform serving as a knowledge repository for sharing and aggregating data on e-infrastructure projects throughout Sub-Saharan Africa. E-infrastructures (electronic research infrastructures) are collections of ICT-based resources and services used by the worldwide research and education community to conduct collaborative projects and generate, exchange and preserve knowledge.

iMENTORS strengthens the development of European policy on research infrastructures by providing donors and policy-makers with a comprehensive tool to test hypotheses and understand trends in e-infrastructure development assistance across Sub-Saharan African countries. It increases the value of data by providing more descriptive information about international e-infrastructure development, and strengthens efforts to improve donors' and recipients' strategic planning and coordination. By identifying and mapping the majority of e-infrastructure projects in Sub-Saharan Africa, the project provides valuable knowledge of what is currently on the ground, which in itself is the first step toward more informed future choices and decision-making on e-infrastructure deployment. Additionally, the software tool assists policy-makers and donor agencies in using the information more effectively, by filtering it and associating it with other variables to identify trends in aid flows and areas of interest, allowing them to make more informed decisions. The platform also pushes for stronger cooperation between the different e-infrastructure stakeholder groups, providing decisive support in their efforts to develop globally connected and interoperable e-infrastructures.

The dissemination activities planned in this project have raised the visibility of e-infrastructure activity in the Sub-Saharan African region among wider audiences, especially research communities in EU countries and international development aid agencies. By engaging key e-infrastructure stakeholders at the recipient-country level, the project triggers debates on the relative effectiveness of donors and creates the impetus to move to more effective coordination mechanisms.

For more information visit: www.iMENTORS.eu [...]



Asynchronous Parallel Computing Programming School in Bucaramanga, Colombia

2013-07-01T17:21:06.840+02:00



With more than 70 students from different South American countries and the support of the Barcelona Supercomputing Center (BSC), Spain, the Asynchronous Parallel Computing Programming with MPI/OmpSs school, addressed to hybrid architectures, is under way in Bucaramanga, Colombia. The school is organized by the High Performance and Scientific Computing Laboratory of the Universidad Industrial de Santander (SC3 UIS) in Bucaramanga, and continues until Friday the 5th.

The school seeks to spread specific programming competences among the researchers, engineers and students who interact with the Latin American and Caribbean Advanced Computing Service (SCALAC, from its Spanish/Portuguese acronym), specifically on hybrid architectures such as GUANE-1, the main HPC platform of SC3 UIS (http://sc3.uis.edu.co).

Several of the applications to be tested at the school relate to particular uses in science and engineering, for example weather modelling, bioinformatics and computational chemistry, astrophysics, condensed matter, energy and seismics. SCALAC brings together the grid and advanced computing experience gained from projects carried out over the last 10 years in Latin America. As a flavour of the asynchronous style the school teaches, see the sketch below.
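Here is a minimal sketch of overlapping computation with non-blocking MPI communication, written with the mpi4py Python bindings. It is an illustrative toy under our own assumptions, not course material, and it stands in for the MPI/OmpSs C/Fortran directive style the school actually covers.

```python
# Toy sketch of asynchronous (non-blocking) MPI communication with
# mpi4py, overlapping useful computation with data transfer.
# Run with: mpiexec -n 2 python async_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = np.full(1_000_000, float(rank))      # a large buffer to exchange

if rank == 0:
    req = comm.Isend(data, dest=1, tag=7)   # returns immediately
    busy = np.sin(data).sum()               # useful work overlaps the send
    req.Wait()                              # block only when we must
    print(f"rank 0 did {busy:.1f} of work while the send completed")
elif rank == 1:
    buf = np.empty_like(data)
    req = comm.Irecv(buf, source=0, tag=7)  # post the receive early
    busy = np.cos(data).sum()               # work overlaps the receive
    req.Wait()
    print(f"rank 1 received {buf.size} values while computing")
```

The same overlap, expressed as data-dependent tasks rather than explicit requests, is what the MPI/OmpSs hybrid model automates.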


More information: http://www.redclara.net/indico/evento/ompss 



iSGTW teams up with NUANCE to increase coverage of Africa

2013-06-28T10:28:31.625+02:00

iSGTW is extremely pleased to announce that it has signed a memorandum of understanding with NUANCE, allowing the limited sharing of some content between the two publications.

NUANCE stands for ‘The Newsletter of the UbuntuNet Alliance: Networks, Collaboration, Education’ and is a publication we at iSGTW hold in high regard for its excellent coverage of national research and education networks (NRENs) in Africa.

At iSGTW, we hope that this exciting new partnership will allow us to increase our coverage of this region, where many exciting developments in the world of e-infrastructures are currently taking place.


You can read the latest edition of NUANCE on the UbuntuNet Alliance website.



"Moore's Law is alive and well" — but is that enough?

2013-07-03T17:56:23.250+02:00

On Monday, with the announcement of the new Top 500 list of the world's fastest supercomputers, we wrote briefly about the challenges computer scientists across the globe face in achieving exascale supercomputers by the end of the decade. To put the scale of this challenge into perspective, China's Milky Way 2 supercomputer, the fastest in the world today by a significant margin, is capable of reaching 34 petaFLOPS. Plus, there's the small matter of energy efficiency still to tackle if exascale supercomputers are going to become a realistic proposition.

Yesterday evening, Stephen S. Pawlowski of Intel gave a keynote speech at ISC'13 entitled 'Moore's Law 2020'. "People are always saying that Moore's Law is coming to an end, but transistor dimensions will continue to scale two times every two years and improve performance, reduce power and reduce cost per transistor," he says. "Moore's Law is alive and well."

"But getting to Exascale by 2020 requires a performance improvement of two times every year," Pawlowski explains. "Key innovations were needed to keep us on track in the past: many core, wide vectors, low power cores, etc."
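To see what these two rates imply, start from today's 34-petaFLOPS leader and compound forward to 2020: doubling every two years versus doubling every year gives machines an order of magnitude apart. The projection below is our own back-of-the-envelope arithmetic, not Intel's roadmap.

```python
# Compound the 34 petaFLOPS of today's fastest system (mid-2013) out to
# 2020 under the two doubling rates discussed above. Illustrative
# arithmetic only -- not a statement of any vendor's roadmap.
start_pflops = 34
years = 2020 - 2013

moore_rate = start_pflops * 2 ** (years / 2)   # 2x every two years
annual_rate = start_pflops * 2 ** years        # 2x every year

print(f"2x every 2 years: {moore_rate:,.0f} petaFLOPS by 2020")   # ~385
print(f"2x every year:    {annual_rate:,.0f} petaFLOPS by 2020")  # ~4,352
```

Only the faster rate clears the 1,000-petaFLOPS exascale threshold (around 2018 in this toy projection), which is exactly why Pawlowski argues that dimension scaling alone will not be enough.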

"Going forward, scaling will be as much about material and structure innovation as dimension scaling". He cites potential future technologies, such as graphene, 3D chip stacking, nanowires, and photonics, as ways of achieving this.

Pawlowski argues for less focus on achieving a good score on the Top 500 list by optimising performance for the Linpack benchmark. Instead, he says, there needs to be more focus on creating machines suited to running scientific applications. "Moore's Law continues, but the formula for success is changing," concludes Pawlowski.



Top of the FLOPS at ISC’13

2013-07-03T17:55:42.621+02:00

This week, almost 2,500 experts from industry, research, and academia have gathered in the German city of Leipzig for the International Supercomputing Conference ’13 (ISC’13). The event played host today to the announcement of the new TOP500 list of the fastest supercomputers in the world. Milky Way 2 (also known as Tianhe-2), located at the National University of Defense Technology (NUDT) in Changsha, China, was announced as the new winner. “The Milky Way 2 project lasted three years and required the work of more than 400 team members,” says Kai Lu, vice dean of the School of Computer Science at NUDT. Boasting over 3 million cores and with a peak performance of around 34 petaFLOPS on the Linpack benchmark, Milky Way 2 is nearly twice as fast as the previous winning supercomputer, Titan, at Oak Ridge National Laboratory, US. Titan has now slipped to the number two spot on the list, with another US-based supercomputer, Sequoia, located at Lawrence Livermore National Laboratory, completing the top three. JUQUEEN at the Jülich Supercomputing Centre in Germany was ranked as the fastest machine in Europe.

“Our projections still point towards reaching exascale systems by around 2019,” says Erich Strohmaier of the US Department of Energy’s Lawrence Berkeley National Laboratory, who gave an overview of the highlights of the new TOP500 list. Strohmaier, however, warns that increasing the power efficiency of supercomputing systems will continue to be a major challenge over the coming years: “If we don’t start to have some new ideas about how to build supercomputers, we will truly be in trouble by the end of the decade.”


“If you actually look at what people want to do, an exaflop is still not enough,” says Bill Dally of NVIDIA and Stanford University, California, US. He capped off this morning’s programme with a keynote speech on the future challenges of large-scale computing. “The appetite for performance is insatiable,” he says, citing work in a number of research fields as evidence that performance is currently still the limiting factor in terms of the exciting science which can potentially be done. “If we provide increased performance, people will always find interesting things to do with it.”



Latin American High Performance and Grid Computing Community calls for contributions to CLCAR 2013 in San José, Costa Rica

2013-06-17T05:01:30.474+02:00

The Latin American Conference on High Performance Computing (CLCAR, from its Spanish acronym) will be held this year in San José, Costa Rica. Since 2007, CLCAR has been an event for students, scientists and researchers in the areas of high performance computing, high throughput computing, parallel and distributed systems, e-science and applications, in a global context but with a special focus on Latin American proposals. GridCast is a media partner of this Latin American activity.

The program and scientific committees are formed by experts and researchers from different countries and related domains, who will carry out the evaluation of the proposals. CLCAR 2013 will be held in San José, Costa Rica, on August 26-30.

CLCAR 2013's official languages are English, Portuguese and Spanish. Proposals may be presented in two main forms, oral presentations (full paper) and posters (extended abstract), with submissions due by Sunday, June 23.

This year two activities are proposed within the conference: the first, bio-CLCAR, proposed by bioinformatics and biochemistry researchers; the second, the CLCAR scientific visualization challenge.

Proposals may be submitted in ENGLISH, PORTUGUESE or SPANISH only (full papers and extended abstracts). Papers written in Spanish or Portuguese should include the title and abstract in English as well. Oral presentations may be given in any CLCAR official language, but the slides must be in English. Selected posters from extended abstracts must be presented in English.

For more information about CLCAR 2013, please visit the official site: www.clcar.org




Praise for PRACE and the importance of building expertise in HPC

2013-06-17T00:12:50.686+02:00

Yesterday, the PRACE Scientific Conference was held in Leipzig, Germany. It is one of several satellite events taking place alongside ISC'13, which gets underway in full today.

After a brief welcome address from Kenneth Ruud, chairman of the PRACE Scientific Steering Committee, Kostas Glinos, head of the European Commission's eInfrastructures unit, spoke about the vision for HPC in Horizon 2020.

"HPC has a fundamental role in driving innovation, leading to societal impact through better solutions for societal challenges and increased industrial competitiveness," says Glinos. "It's not just about exascale hardware and systems, but about the computer science needed to have a new generation of ICT."

"Only very few applications using HPC really take advantage of current petaFLOPS systems," he adds. "New computational methods and algorithms must be developed, and new applications must be reprogrammed in radically new ways." In addition, Glinos highlighted the importance of public procurement of commercial systems for developing the next generation of IT infrastructures, which you can read more about in the recent iSGTW article ‘Golden opportunities for e-infrastructures at the EGI Community Forum’.

Finally, he spoke about the conclusions of the recent EU Council for Competitiveness: "HPC is an important asset for the EU... and the council acknowledges the very good achievements of PRACE over the years." For Horizon 2020, Glinos says: "We want to build on PRACE's achievements to advance further integration and sustainability." He argues for the importance of an EU-level policy on HPC addressing the entire HPC ecosystem, saying that the sum of national efforts is not enough: "we need to exchange and share priorities."

The conclusions of the EU Council for Competitiveness were also highlighted by Sergi Girona, chair of the PRACE board of directors. "We have to work together because we want to support science and industry, the development of HPC in Europe, and the development and training of persons," he says. During his talk, Girona also gave an overview of PRACE in numbers: with its 25 member countries, PRACE has a budget of €530m for 2010-2015, including €70m of funding from the European Union. Girona explains that PRACE has now awarded more than 5 billion computation hours since 2010 and is currently providing resources of nearly 15 petaFLOPS.

However, he emphasises that PRACE is about much more than simply providing access to HPC resources. "We don't just want to give access to computing resources; we want to support users at all stages – it is key to train people," he says. "We have created six training centres in Europe and have approved a curriculum with 71 PRACE advanced training centre courses for this year."

The importance of training was also highlighted by Glinos: "We need more expertise, so we intend to support a limited number of centres of excellence. Topics may relate to scientific or industrial domains, such as climate modelling or cancer research for example, or they may be 'horizontal', addressing wider challenges which exist in HPC. These centres of excellence need to be led by the needs of the users and the application owners."

Following Girona's talk, Wolfgang Eckhart of the Technical University of Munich, Germany, gave a presentation on his research in the field of molecular dynamics. He and his colleagues have been selected as winners of the PRACE ISC Award for their paper entitled '591 TFLOPS Multi-Trilli[...]



IT as a Utility in Emerging Economies

2013-06-12T16:00:44.606+02:00

[Image: Mobile is critical for IT in emerging economies. (CC-BY-NC-SA AdamCohn, Flickr)]

The ITaaUN+[1] workshop on IT as an infrastructure in emerging economies attracted social activists-cum-academics, academics-cum-industrial consultants, linguists, digital humanists, and technology visionaries to the Association of Commonwealth Universities (ACU), where ACU Secretary General John Wood played host to the compact but vocal group. The agenda was to discuss the challenges and opportunities of IT, seen as a utility, in the majority world.

John Wood is himself a veteran of e-infrastructures in the UK, having been Chief Executive of the Council for the Central Laboratory of the Research Councils, where he was responsible for RAL and Daresbury. Later, he held positions at Imperial, first as Principal of the Engineering Faculty and later as Senior International Advisor. He now sits on the boards of JISC and the British Library, and has advised numerous governmental and corporate organisations across the globe. His experience at the ACU gives him a unique perspective on the infrastructures already in place in Commonwealth countries that are also developing countries (assuming provision of computational infrastructure in higher education institutions is an accurate barometer of infrastructure elsewhere in a country, which it usually is, to some degree).

Why the service/utility distinction, though?

Jeremy Frey, physical chemist at the University of Southampton and one of the minds behind the Chemistry 2.0 application CombeChem, explained that there is a natural progression of a technology as it becomes part of the fabric of our lives. The transition (Revelation > Innovation > Specialist Tool/Resource > Service > Utility) is one that the utilities forming the infrastructure of daily activity, such as electricity and telecommunications, have already progressed along. IT, and especially networked IT, is somewhere between the last two stages (and there will continue to be debates about whether and to what extent utilities should be services or utilities, depending on the economic model in place). But, as the UN's World Summit on the Information Society (WSIS) has suggested, internet access is beginning to be established by governments as a basic human right, and so it will increasingly be considered a utility rather than a service: something we need rather than just something we want. And the impact of this will be felt nowhere more than in the emerging economies of the developing world.

In 1965 the centre of gravity of global wealth was located on the plains of La Mancha, in Spain. This was, however, the last time that this topographical El Dorado[2] lay in Europe. As the years have passed, the centre of international wealth has meandered out across the Mediterranean, and zig-zagged through northern Algeria and Tunisia. Having bounced off Malta and back towards Tunis, it is now poised to zoom eastward once more, skirting past northern Libya to settle in the Middle East some time around the middle of this century (remaining oil reserves notwithstanding).

The economic and social conditions that led to Europe being the dominant force in the world over the last few centuries can be attributed to a combination of factors: a great abundance of natural resources, particularly wood, coal and iron, suited to the manufacture of ever-more-complex tools; the concentration of histo[...]



Jazz music and Big Data at the TERENA Networking Conference 2013

2013-06-05T10:46:40.483+02:00

Nearly 20 years ago, the European Union came into being; this week's TERENA Networking Conference is taking place in Maastricht, where the treaty that created it was signed. Regardless of whether you see the EU as a positive or negative entity in today's cash-strapped times, it's appropriate that the Trans-European Research and Education Networking Association should meet here to discuss the next wave of telco innovation. Hopefully this will mean a few extra euros will be on their way to all of us in the future.

The opening reception on Monday featured a jazz performance inspired by the theme of the conference, "Innovating Together". Thanks to an international collaboration of artists and technicians, on-stage musicians performed alongside their 'virtual' band leader, who was in Edinburgh, UK, assisted by LoLA (LOw LAtency audio/visual system). LoLA was developed by GARR, the Italian research and education networking organisation, and the Music Conservatory G. Tartini in Trieste. Using LoLA, performing artists are able to interact in a natural way even if they are thousands of kilometres apart, relying on the high-quality and very large bandwidth connectivity offered by research and education networks, which minimises network-related delay and jitter.

One source of innovation is likely to be data, and lots of it. In a session called 'Big Data, Big Deal!', Harold Teunissen of SURFnet looked at the big data problem. He noted that after the arrival of his twins, he found himself faced with a big data problem of his own: over 30,000 family snaps to share, store and manage. "These two changed my big data perspective for ever," he said.

In the Netherlands, all ICT activities for higher education and research are now brought under the SURF umbrella, including cloud, supercomputing and the eScience centre. But what is big data? NRENs generate 0.3% of worldwide data. Is this data actually big data, or is it mainly people using Facebook, Twitter, and the like? We don't know. Research is now seen as a generator of big data, but the cost of generating it can be very high. For example, Teunissen noted that 10 billion dollars spent on the Large Hadron Collider to arrive at one bit of information (proving that the Higgs particle exists = true) might be seen as a lot of money by some. Of course, the story of the LHC and its research is a lot more complex than that.

Big data actually means large volume, generated at high velocity and in many different forms, such as video, text and images. What customers need to handle this data tends to be either technology and performance OR solutions and ease of use, depending on whether or not they are early adopters.

Taking up the LHC theme, William Johnston of ESnet looked at high energy physics as a prototype for data-intensive science, now paying dividends for the teams working on the Square Kilometre Array and genomics, for example. Growth in scientific data follows Moore's Law, leading to exponential growth. However, when looking for technological solutions to big data problems, commercial solutions may not be up to the task.

"Software testing started 5 years before the LHC turned on. Science is not YouTube and has special requirements," said Johnston. Simon Leinen of the Swiss NREN, SWITCH, asked whether we should in fact make science more like YouTube, and so explore using the cloud and existing services. Johnston responded that HEP is looking at[...]