[We apologize for multiple postings]
Academia Sinica, Taipei, Taiwan
Invitation to Participate
The International Symposium on Grids and Clouds (ISGC) 2013, hosted by the Academia Sinica Grid Computing Centre (ASGC), will be held at Academia Sinica in Taipei, Taiwan, from 17 to 22 March 2013, with co-located events and workshops. The Program Committee cordially invites your participation in the Call for Abstracts and/or Call for Workshops.
The theme of ISGC 2013 is Collaborative Simulation, Modelling and Data Analytics in Grids and Clouds.
ISGC 2013 will bring together, from the Asia-Pacific region and around the world, researchers who are developing applications to produce these large-scale data sets and the data analytics tools to extract knowledge from the generated data, together with the e-infrastructure providers that integrate the distributed computing, storage and network resources to support these multidisciplinary research collaborations. The meeting will feature workshops, tutorials, keynotes and technical sessions to further support the development of a global e-infrastructure for collaborative Simulation, Modelling and Data Analytics.
Call for Workshops – Please submit workshop proposals before 31 October 2012 to the Secretariat (Ms. Catherine Wang, email@example.com). The “CFW Submission Form” is attached and can also be downloaded here.
Call for Abstracts – Please submit abstracts before 16 November 2012 through Indico at http://indico3.twgrid.org/indico/conferenceDisplay.py?confId=357
Topics of Interest
Applications and results from the Virtual Research Communities and Industry:
1. Physics (including HEP) & Engineering Applications
Submissions should report on experience with physics and engineering applications that exploit grid and cloud computing services, applications that are planned or under development, or application tools and methodologies. Topics of interest include:
- End-user data analysis;
- Management of distributed data;
- Application-level monitoring;
- Performance analysis and system tuning;
- Management of an experimental collaboration as a virtual organization;
- Comparison between grid and other distributed computing paradigms as enablers of physics data handling and analysis;
- Expectations for the evolution of computing models drawn from recent experience handling extremely large and geographically diverse datasets.
2. Biomedicine & Life Sciences Applications
Over the last decade, Biomedicine and the Life Sciences have changed dramatically thanks to the use of High Performance Computing and highly distributed computing infrastructures such as grids and clouds.
Submissions should concentrate on practical applications in the fields of Biomedicine and Life Sciences, such as:
3. Earth & Environmental Science & Biodiversity
Today, it is well understood that precise, long-term observations are essential to quantify the patterns and trends of ongoing environmental changes, and that continuously evolving models are needed to integrate our fundamental knowledge of processes with the geospatial and temporal information delivered by various monitoring activities. It is therefore critically important that the environmental sciences community place a strong emphasis on analysing best practices and adopting common solutions for the management of heterogeneous data and data flows.
The Natural and Environmental sciences are placing an increasing emphasis on understanding the Earth as a single, highly complex, coupled system encompassing both living and non-living components. It is well accepted, for example, that feedbacks involving oceanic and atmospheric processes can have major consequences for the long-term development of the climate system, which in turn affects biodiversity and natural hazards and can control the development of the cryosphere and lithosphere. Natural disaster mitigation is one of the most critical regional issues in Asia.
Despite the diversity of environmental sciences, many projects share the same significant challenges. These include the collection of data from multiple distributed sensors (potentially in very remote locations), the management of large low-level data sets, the requirement for metadata fully specifying how, when and where the data were collected, and the post-processing of those low-level data into higher-level data products which need to be presented to scientific users in a concise and intuitive form.
This session will address, in particular, how these challenges are being handled with the aid of the e-Science paradigm.
4. Humanities & Social Sciences Applications
Researchers working in the social sciences and the humanities have started to explore the use of advanced computing infrastructures such as grids to address the grand challenges of their disciplines. For example, social scientists working on issues such as globalization, international migration, uneven development and deprivation are interested in linking complementary datasets and models at local, national, regional and global scales.
Similarly, in the humanities, researchers from a wide range of disciplines are interested in managing, linking and analyzing distributed datasets and corpora. There has been a significant increase in the digital material available to researchers, through digitization programmes but also because more and more data is now "born digital".
As more and more applications demonstrate the successful use of e-Research approaches and technologies in the humanities and social sciences, questions arise as to whether common models of usage exist that could be underpinned by a generic e-Infrastructure. The session will focus on experience gained in developing e-Research approaches and tools that go beyond single application demonstrators. Their wider applicability may be based on a set of common concerns, common approaches, or reusable tools and services. We also specifically invite contributions concerned with teaching e-Research approaches at undergraduate and postgraduate levels, as well as other initiatives to "bridge the chasm" between early adopters and the majority of researchers.
Activity to enable the provisioning of a Resource Infrastructure
5. Infrastructure & Operations Management
This session will cover the current state of the art and recent advances in managing the internal operation of large-scale research infrastructures and the interactions between them. The scope of this track includes advances in high-performance networking (including the IPv4 to IPv6 transition), monitoring tools and metrics, service management (ITIL and SLAs), security, improving service and site reliability, interoperability between infrastructures, user and operational support procedures, and other topics relevant to providing a trustworthy, scalable and federated environment for general grid and cloud operations.
6. Middleware & Interoperability
Middleware technology is an essential cornerstone of modern federated grid and cloud infrastructures. Its robustness, scalability and reliability are of major importance in supporting academic and business infrastructure users in gaining new scientific insights or increasing their revenues. Until recently, middleware technologies were developed from the specific requirements of particular communities and use cases. Today, middleware technologies must converge by employing open standards to enable interoperability among technologies and infrastructures, and to re-use components from other technologies; convergence, collaboration and innovation are, and must remain, key elements of this endeavor. Submissions should therefore highlight their contribution to the convergence, collaboration and innovation of interoperable middleware technologies for federated IT infrastructures. Topics of interest include but are not limited to:
7. Infrastructure Clouds & Virtualisation
This track will focus on the use of Infrastructure-as-a-Service (IaaS) cloud computing and virtualization technologies in large-scale distributed computing environments in science and technology. We solicit papers describing underlying virtualization and "cloud" technology, scientific applications and case studies related to using such technology in large-scale infrastructure, as well as solutions overcoming challenges and leveraging opportunities in this setting. Of particular interest are results exploring the usability of virtualization and infrastructure clouds from the perspective of scientific applications; the performance, reliability and fault tolerance of the solutions used; and data management issues. Papers dealing with cost, pricing and cloud markets, with security and privacy, and with portability and standards are also most welcome.
8. Business Models & Sustainability
Whenever a business is established, it employs a particular business model that describes the architecture of the value (economic, social, etc.) creation, delivery, and capture mechanisms employed by the enterprise. Business models are used to describe and classify businesses (especially in an entrepreneurial setting), but they are also used by managers inside companies to explore possibilities for future development. Business models are also referred to in some instances within the context of accounting for purposes of public reporting.
Sustainability is the capacity to endure; it interfaces with economics through the social and ecological consequences of economic activity. Among the many ways of living more sustainably, one can cite the use of science to develop new technologies (green technologies, renewable energy, or new, affordable and cost-effective practices) to make adjustments that conserve resources.
Both concepts apply to the e-infrastructure world, and the purpose of this session is to report on existing or foreseen initiatives aimed at guaranteeing the long-term sustainability of e-Infrastructures by means of business models.
Technologies that provide access to, and exploitation of, different site resources and infrastructures
9. Data Management
Data management encompasses the organization, distribution, storage, access, and validation of digital assets. Data management requirements can be characterized by data life stages, ranging from shared project collections, to formally published libraries, to the preservation of reference collections. Papers are sought that demonstrate the management of data through the multiple phases of the scientific data life cycle, from creation to re-use. Of particular importance are demonstrations of systems that validate assertions about collection properties, including integrity, chain of custody, and provenance.
10. Managing Distributed Computing Systems
This track will highlight the latest research achievements in interoperability between commercial clouds, conventional grids, desktop grids and volunteer computing. The topics will cover new technologies in the related software frameworks, recent application developments, and infrastructure operation and user support techniques at all levels: campus, institutional, and very large-scale cyberscience computing.
Special focus will be on the following areas:
11. High Performance & Technical Computing (HPTC)
With the growing availability of computing resources such as public grids (e.g., EGI and OSG) and public/private clouds (e.g., Amazon EC2), it has become possible to develop and deploy applications that exploit as many computing resources as possible. However, it is quite challenging to effectively access, aggregate and manage all the available resources, which are usually under the control of different resource providers. This session solicits recent research and development achievements and best practices in exploiting the wide variety of computing resources available. HPTC resources include dedicated High Performance Computing (HPC), High Throughput Computing (HTC), GPU and many-core systems.
Topics of interest include, but are not limited to, the following:
· Experiences, use cases and best practices in the development and operation of large-scale HPTC applications;
· Delivery of and access to HPTC resources through grid and cloud computing ("as a Service") models;
· Integration and interoperability to support coordinated, federated use of different HPTC e-infrastructures;
· Use of virtualization techniques to support portability across different HPTC systems;
· Robustness and reliability of HPTC applications and systems over long time scales.
12. Big Data Analytics
Characterized by the commercial sector in terms of Volume, Variety and Velocity (V3), Big Data cannot easily be handled by popular relational databases. The deluge of data is forcing a new generation of scientific processes and discovery mechanisms. Big Data in Big Science is not only large in scale but also globally distributed, as data may come from different sources, in different formats, via different workflows, and at different speeds (including real time). It is transforming all industries, driving innovation in infrastructure, compute and storage hardware, data analytics and algorithms, and a wide range of software applications and services. This track invites innovative research, applications and technology on big data. Submissions should address conceptual modelling as well as techniques related to big data analytics.