ACM International Conference on Computing Frontiers 2012 Invited Speakers

Keynote

Speaker: Moray McLaren

Title: Towards Truly Integrated Photonic and Electronic Computing

Abstract: The long-heralded transition of photonic technology from a rack-to-rack interconnect to an integral part of the system architecture is underway. Silicon photonics, where the optical communications devices are fabricated using the same materials and processes as CMOS logic, will allow 3D or monolithically integrated devices to be created, minimizing the overhead of moving between the electronic and photonic domains. System architects will then be free to exploit the unique characteristics of photonic communications, such as broadband switching and distance independence. Photonic interconnects are very sensitive to the performance of connectors, and so may favor architectures where redundancy and reconfiguration are used in preference to replacement.

Bio: Moray McLaren is a Distinguished Technologist with HP Labs, working in the Intelligent Infrastructure Lab. His recent research activities have focused on the impact of nanophotonics on future computer architectures, with two main areas of study: high-speed networking and memory architectures. Prior to joining HP Labs in January 2007, he worked on the development of high-speed interconnects for parallel processors; these interconnects were successfully deployed in a significant number of supercomputing systems around the world. He holds a number of patents in the area of high-speed network interconnect design. His previous experience also includes the development of parallel system architectures and CMOS microprocessors. He holds a first-class honors degree in microelectronics from the University of Edinburgh.

Special Session on Computational Intelligence in Games

Presentation A

Speaker: Daniele Loiacono

Title: Learning, Evolution and Adaptation in Racing Games

Abstract: Modern racing games offer a realistic driving experience and a vivid game environment. Accordingly, developing this type of game involves several challenges and requires a large amount of game content. Computational intelligence is a promising technology for dealing effectively with these challenges and, at the same time, for reducing the cost of the development process. In this talk, I provide an overview of the most relevant applications of computational intelligence methods in the domain of racing games. In particular, I show that computational intelligence can be successfully applied (i) to develop highly competitive non-player characters, (ii) to design advanced racing behaviors such as overtaking maneuvers, and (iii) to automatically generate tracks and racing scenarios.

Presentation B

Speaker: Georgios Yannakakis

Title: Game AI Revisited

Abstract: More than a decade after the early research efforts on the use of artificial intelligence (AI) in computer games and the establishment of a new AI domain, the term "game AI" needs to be redefined. Traditionally, the tasks associated with game AI revolved around non-player character (NPC) behavior at different levels of control, varying from navigation and pathfinding to decision making. Commercial-standard games developed over the last 15 years and current game productions, however, suggest that the traditional challenges of game AI have been well addressed via the use of sophisticated AI approaches, not necessarily following or inspired by advances in academic practice. The marginal penetration of traditional academic game AI methods in industrial productions has been due mainly to the lack of constructive communication between academia and industry in the early days of academic game AI, and to the inability of academic game AI to propose methods that would significantly advance existing development processes or provide scalable solutions to real-world problems. Recently, however, there has been a shift of research focus, as the current plethora of AI uses in games is breaking with the non-player-character AI tradition. A number of these alternative AI uses have already shown significant potential for the design of better games.

This talk will present four key game AI research areas that are currently reshaping the research roadmap of the field and that put the term game AI in a new perspective. These game AI flagships include the computational modeling of player experience, the procedural generation of content, the mining of player data on a massive scale, and alternative AI research foci for enhancing NPC capabilities.

Presentation C

Speaker: Simon Lucas

Title: Towards More Intelligent Adaptive Video Game Agents: A Computational Intelligence Perspective

Abstract: This talk provides a computational intelligence perspective on the design of intelligent video game agents. The talk explains why this is an interesting area to research, and outlines the most promising approaches to date, including evolution, temporal difference learning and Monte Carlo Tree Search. Strengths and weaknesses of each approach are identified, and some research directions are outlined that may soon lead to significantly improved video game agents with lower development costs.
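
To make the flavor of one of these approaches concrete, here is a minimal Monte Carlo Tree Search agent in C for a toy game of Nim (players alternately take one to three stones; whoever takes the last stone wins). It is a generic UCT sketch under our own naming, not code from the talk: selection uses UCB1, expansion adds the legal moves, simulation plays random games, and backpropagation updates the statistics.

    /* Illustrative only: minimal MCTS (UCT) for Nim. Tree memory is
       deliberately never freed in this sketch. Compile with -lm. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define MAX_CHILDREN 3
    #define ITERATIONS   20000

    typedef struct Node {
        int stones, player;      /* game state and player to move       */
        int visits;              /* simulations through this node       */
        double wins;             /* wins for the player who moved here  */
        struct Node *parent, *child[MAX_CHILDREN];
        int n_children;
    } Node;

    static Node *new_node(int stones, int player, Node *parent) {
        Node *n = calloc(1, sizeof *n);
        n->stones = stones; n->player = player; n->parent = parent;
        return n;
    }

    /* Selection: UCB1 trades off win rate against exploration. */
    static Node *select_child(Node *n) {
        Node *best = NULL; double best_score = -1.0;
        for (int i = 0; i < n->n_children; i++) {
            Node *c = n->child[i];
            if (c->visits == 0) return c;   /* try unvisited moves first */
            double score = c->wins / c->visits
                         + 1.41 * sqrt(log((double)n->visits) / c->visits);
            if (score > best_score) { best_score = score; best = c; }
        }
        return best;
    }

    /* Expansion: create one child per legal move (take 1-3 stones). */
    static void expand(Node *n) {
        for (int take = 1; take <= 3 && take <= n->stones; take++)
            n->child[n->n_children++] =
                new_node(n->stones - take, 1 - n->player, n);
    }

    /* Simulation: play uniformly random moves; return the winner. */
    static int rollout(int stones, int player) {
        for (;;) {
            int take = 1 + rand() % 3;
            if (take > stones) take = stones;
            stones -= take;
            if (stones == 0) return player;  /* took the last stone */
            player = 1 - player;
        }
    }

    /* Backpropagation: credit a win to every node the winner moved into. */
    static void backprop(Node *n, int winner) {
        for (; n; n = n->parent) {
            n->visits++;
            if (n->parent && n->parent->player == winner) n->wins += 1.0;
        }
    }

    int main(void) {
        Node *root = new_node(10, 0, NULL);  /* 10 stones, player 0 moves */
        for (int it = 0; it < ITERATIONS; it++) {
            Node *n = root;
            while (n->n_children > 0) n = select_child(n);  /* 1. select   */
            if (n->stones > 0 && n->visits > 0) {           /* 2. expand   */
                expand(n);
                n = n->child[0];
            }
            int winner = (n->stones == 0) ? 1 - n->player   /* 3. simulate */
                                          : rollout(n->stones, n->player);
            backprop(n, winner);                            /* 4. backprop */
        }
        int best = 0;            /* recommend the most-visited root move */
        for (int i = 1; i < root->n_children; i++)
            if (root->child[i]->visits > root->child[best]->visits) best = i;
        printf("take %d stone(s)\n", root->stones - root->child[best]->stones);
        return 0;
    }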

Presentation D

Speaker: Kenneth Stanley

Title: How AI Can Change the Way We Play Games

Abstract: While artificial intelligence (AI) in games is often associated with enhancing the behavior of non-player characters, at its cutting edge AI offers the potential for entirely new kinds of gaming experiences. In this talk I will focus on this frontier of AI in games through three examples of games from my research that are not only enhanced by AI, but would not even be possible without the unique AI techniques behind them. In these experimental games, called NERO, Galactic Arms Race, and Petalz, players become teachers, AI creates its own content, and unique creations are explicitly bred and traded by the players themselves. The discussion will focus on the inspiration for the technologies behind these games (including some related applications) and the long-term implications of unique and creative AI algorithms for gaming.

Special Session on Exascale in Europe

Presentation A

Speaker: Adam Carter

Title: CRESTA: A Software-Focused Approach to Exascale Co-design

Abstract: The CRESTA project is one of three complementary exascale software projects funded by the European Commission. The three-year project employs a novel approach to exascale system co-design that focuses on a small, representative set of applications to inform and guide software and systemware development. The methodology identifies problem areas in these applications and uses that knowledge to evaluate alternative solutions, which in turn inform software and hardware advances. CRESTA pursues both incremental and disruptive advances to move towards solutions across the whole of the exascale software stack.

Presentation B

Speaker: Roberto Giorgi

Title: TERAFLUX: Exploiting Dataflow Parallelism in Teradevices

Abstract: The TERAFLUX project is a Future and Emerging Technologies (FET) Large-Scale Project funded by the European Union. TERAFLUX is at the forefront of major research challenges such as programmability, manageable architecture design, and reliability of many-core chips with 1000 or more cores. In the near future, new computing systems will consist of a huge number of transistors, probably one trillion (10^12) by 2020; we call such systems "Teradevices".

Recent developments in the worldwide scenario include the availability of a new type of transistor (the 3D transistor), which marks the biggest change in the semiconductor industry since the introduction of the transistor itself in 1948. New materials such as graphene may allow even greater power savings. Technology-node scaling has reached 22nm, with 14nm silicon foundries expected to be operative by 2013, and the pace seems likely to continue at least until 8nm. 3D layering gives new life to Moore's law as well. In this scenario, the TERAFLUX project brings together 10 industrial and academic partners to find common ground for addressing all three of the above challenges at once. The research in this project is inspired by the dataflow principle. As recalled by Jack Dennis, dataflow is "a scheme of computation in which an activity is initiated by the presence of the data it needs to perform its function." We believe that, if properly exploited, dataflow can enable parallelism orders of magnitude greater than what is achievable by control-flow-dominated execution models. To investigate our concepts, we are studying dataflow principles at every level of a complete transformation hierarchy: from general, complex applications capable of properly loading a Teradevice, through programming models, compilation tools, and reliability techniques, down to the architecture.
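
To illustrate the firing rule that Dennis's definition describes, the following minimal C sketch gives each thread a synchronization count of missing inputs: producers write a datum and decrement the count, and the thread becomes runnable exactly when the count reaches zero. The names (df_thread, write_input, the ready queue) are ours, chosen for illustration only, and do not reflect the TERAFLUX implementation.

    /* Illustrative only: the dataflow firing rule in miniature. */
    #include <stdio.h>

    #define MAX_INPUTS  4
    #define MAX_READY   8

    typedef struct df_thread {
        void (*body)(struct df_thread *);  /* the computation to fire    */
        long inputs[MAX_INPUTS];           /* frame written by producers */
        int  sync_count;                   /* inputs still missing       */
    } df_thread;

    static df_thread *ready[MAX_READY];
    static int n_ready;

    /* Producer side: deliver one input; the consumer becomes runnable
       exactly when its last missing input arrives. */
    static void write_input(df_thread *t, int slot, long value) {
        t->inputs[slot] = value;
        if (--t->sync_count == 0)          /* all inputs present */
            ready[n_ready++] = t;
    }

    static void add_body(df_thread *t) {
        printf("%ld + %ld = %ld\n", t->inputs[0], t->inputs[1],
               t->inputs[0] + t->inputs[1]);
    }

    int main(void) {
        df_thread adder = { add_body, {0}, 2 };  /* waits for two inputs */

        write_input(&adder, 0, 40);   /* one input: still not runnable   */
        write_input(&adder, 1, 2);    /* second input: adder fires       */

        /* Scheduler: run whatever is ready, in any order; data
           availability alone determines execution order. */
        while (n_ready > 0) {
            df_thread *t = ready[--n_ready];
            t->body(t);
        }
        return 0;
    }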

A further key point is the evaluation of this system: our choice has been to rely on an existing simulation infrastructure (HP Labs' COTSon), which immediately enabled us to start from a present-day Teradevice (i.e., a cluster of 1000+ nodes, where each node consists of tens of cores) and progressively evolve it into a more ambitious system in which we can gradually remove major bottlenecks. While relying on solid and well-known reference points such as the x86-64 ISA, the GCC tools, and the StarSs programming model and applications, we aim to demonstrate the validity of our research on this common evaluation infrastructure.

The system is not forced to follow the dataflow paradigm entirely: in fact, we distinguish between legacy and system threads (L- and S-threads) and dataflow threads (DF-threads). This allows a progressive migration of programs to the new dataflow paradigm, while accelerating the available DF-threads on the more dataflow-friendly cores. Other important choices are the exploration of synchronization mechanisms such as transactional memory, and the re-execution of threads running on failing cores by using dataflow principles. We can currently run parallel, scalable, full-system simulations (with unmodified Linux) of 1000+ x86-64 cores, with acceptable slowdown and accuracy, while experimenting with very ambitious changes to the execution model; supporting the dataflow-thread execution model implies a major effort, especially from the compiler point of view.

Presentation C

Speaker: Axel Auweter

Title: DEEP: An Exascale Prototype Architecture Based on a Flexible Configuration

Abstract: DEEP is a multi-partner international cooperation project supported by the EU's FP7 that introduces a flexible global system architecture combining general-purpose processors with many-core processors (based on the Intel MIC, Many Integrated Core, architecture). With the EXTOLL network, DEEP uses a very powerful interconnect that allows different, application-oriented ratios of general-purpose processors to accelerators. The project includes research and development on programming technologies, tools, and applications, and investigates energy-efficient computing methodologies.

Presentation D

Speaker: Nikola Puzovic

Title: Mont-Blanc: Towards Energy-Efficient HPC Systems

Abstract: This talk will present the Mont-Blanc project, a European initiative to build exascale systems using energy-efficient parts from the embedded market. The energy consumption of current general-purpose and high-performance chips would require an unaffordable total power budget for an exascale system built from such parts.

The Mont-Blanc project aims to lower the total power of exascale systems by using parts from the embedded market, which have a much higher FLOPS/watt ratio than traditional general-purpose processors, at the cost of lower peak performance per chip. Hence, an exascale system built from embedded parts would require a very large number of processors. In this context, overlapping communication and computation is key for applications to reach the system's peak performance. Ordinarily, this would require highly tuned application code that most users cannot afford to write.

The Mont-Blanc project relies heavily on the OmpSs programming model. OmpSs provides a simple parallel programming interface that most users can easily adopt, and an advanced runtime system that automatically overlaps computation and communication, as sketched below. Furthermore, the OmpSs runtime system is able to dynamically adapt the load of each node to maintain overall system load balance.
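
As a hint of what this looks like to the programmer, the sketch below shows two OmpSs-style tasks whose dependency clauses let the runtime see that they touch disjoint data, so a communication task may overlap a computation task. The clause syntax is simplified from the OmpSs specification, and the function names and the stand-in for an MPI halo exchange are our own. With a plain C compiler the pragmas are ignored and the program runs serially; an OmpSs compiler (Mercurium) turns the annotated statements into runtime-scheduled tasks.

    /* Illustrative only: OmpSs-style tasks with data dependencies. */
    #include <stdio.h>

    #define N 1024

    static double block[N], halo[N];

    static void compute(double *b) {        /* local computation */
        for (int i = 0; i < N; i++) b[i] *= 2.0;
    }

    static void exchange(double *h) {       /* stand-in for an MPI halo exchange */
        for (int i = 0; i < N; i++) h[i] = -h[i];
    }

    int main(void) {
        for (int i = 0; i < N; i++) block[i] = halo[i] = i;

        #pragma omp task inout(block[0;N])  /* task 1: computation */
        compute(block);

        /* Task 2 touches only halo, so it has no dependence on task 1
           and the runtime is free to overlap the two. */
        #pragma omp task inout(halo[0;N])
        exchange(halo);

        #pragma omp taskwait                /* wait for both tasks */
        printf("block[1]=%.1f halo[1]=%.1f\n", block[1], halo[1]);
        return 0;
    }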