Overview of Speakers and Programme for the BTD12 on June 7th, 2019

Artificial Intelligence

Building a Useful Chatbot: Beyond the ML and NLP (in English)

Dr. Andreea Hossmann, Principal Product Manager, Data Analytics & Artificial Intelligence, Swisscom

About two years ago, chatbots seemed to be the next big thing after mobile apps. In the meantime, things have cooled down considerably, with chatbots failing to deliver on expectations. However, conversational AI is still moving forward in great strides. So, how can companies avoid the chatbot bubble and still achieve impact with the latest conversational technology?

Andreea Hossmann is a Principal Product Manager for Data, Analytics and AI at Swisscom. She is also a Venture Associate, working with Swisscom Ventures to assess AI startups worldwide. During her 3.5 years at Swisscom, Andreea was a Senior Data Scientist, before assembling and leading a Data Science team to work on AI topics, such as natural language understanding and search. She is an experienced researcher with a background in applied machine learning, network science and computer networking from her PhD education at ETH Zürich.

Building a Self-Driving RC Car (in English)

Bert Jan Schrijver, CTO, OpenValue

This session will share our experiences in converting a small remote-controlled car into an autonomous vehicle. We'll talk about electronics, sensors, AI, computer vision and, of course, the software that ties everything together. We'll introduce you to the world of self-driving cars and compare our solution to what is done in the big leagues by the likes of Tesla's Autopilot and Waymo's self-driving cars. We'll explain the challenges that have to be faced and the dilemmas that come with creating a car driven by software in real-world scenarios.

Bert Jan is CTO at OpenValue in the Netherlands and focuses on Java, Continuous Delivery and DevOps. Bert Jan is a Java Champion, JavaOne Rock Star speaker, Duke's Choice Award winner and leads NLJUG, the Dutch Java User Group. He loves to share his experience by speaking at conferences, writing for the Dutch Java magazine and helping out Devoxx4Kids with teaching kids how to code. Bert Jan is easily reachable on Twitter at @bjschrijver.

High-End Translation Hybrid Using Artificial Intelligence (in English)

Christopher Kränzler, CEO, lengoo.com

While recent approaches based on neural networks have led to significant improvements in the quality of machine translation, human translators play an increasingly important role in introducing these technologies. In this lecture you will learn the basics of machine translation systems. We examine which parameters promise the best translation quality and what matters when using machine translation in order to realize enormous efficiency gains in the professional translation environment.

Born and raised in scenic Bavaria, Christopher graduated from the Karlsruhe Institute of Technology and holds a Master’s degree in Data Science from Columbia University in New York. He has always been an advocate of data-driven decision-making and is an avid speaker on AI, digitalization and localization. Powered by a thorough understanding of, and an appreciation for, data science in combination with a passion for languages, he founded the AI-powered translation platform lengoo in 2013. As founder and CEO of lengoo, he is on a mission to usher the global localization industry into the age of digitalization and to shape the future of translation by combining cutting-edge Neural Machine Translation technology with the qualifications of expert linguists.

In this talk, I will give an introduction to deep learning in the medical field, and in particular its application to the processing of medical images and genetic data. Medical data analytics is a field of data science with its own specificities and challenges, such as data scarcity and variability, and one where the black-box aspect is a major drawback. As a use case, we will present the work carried out at Konica Minolta and TU Munich: starting from raw medical scans and patient data, we detect pathologies and extract relevant features automatically in order to derive models and insights of medical interest.

Marie Piraud received a Ph.D. in Physics from Université Paris-Sud, Orsay, France in 2012 and subsequently was a researcher at the Ludwig-Maximilian University of Munich and then at the Technical University of Munich, Germany. In 2018, she joined Konica Minolta as a senior researcher in digital healthcare, where she develops models for multi-modal data and deep learning computer vision techniques, applied to the better understanding of medical data. She is also a guest researcher in the Image-Based Biomedical Modeling group of the Technical University of Munich.

Deepfakes in the SmartMirror - How Neural Networks are Changing our World

Martin Förtsch and Thomas Endres, TNG

Imagine that you are standing in front of a mirror, but no longer see your own face - instead you see the face of Barack Obama or Angela Merkel. In real time, your own facial expressions are transferred to someone else’s face. The TNG Hardware Hacking Team has managed to create such a prototype and transfer a person's face onto any other face in real time. The basis for this is the so-called deepfake approach: neural networks detect faces in the video input, translate them and integrate them back into the video output. Through this technique, it is possible to project deceptively real imitations onto other people. For this purpose, we used autoencoder networks trained with Keras and various face recognition algorithms. In this talk, Thomas Endres and Martin Förtsch give an entertaining and very vivid introduction to the world of real-time deepfakes. In doing so, they particularly focus on the deep learning techniques used in this application.

Martin Förtsch is an IT consultant at TNG Technology Consulting GmbH, based in Unterföhring near Munich, and studied computer science. His professional focus areas are Agile Development (mainly in Java), Search Engine Technologies, Information Retrieval and Databases. As an Intel Software Innovator and Intel Black Belt Software Developer, he is strongly involved in the development of open-source software for gesture control with 3D cameras such as Intel RealSense, and has built an Augmented Reality wearable prototype device with his team based on this technology. Furthermore, he gives many talks at national and international conferences about the Internet of Things, 3D camera technologies, Augmented Reality and Test-Driven Development. He was awarded the Oracle JavaOne Rockstar award.

In his role as an Associate Partner at TNG Technology Consulting in Munich, Thomas Endres works as an IT consultant. Besides his normal work for the company and its customers, he creates various prototypes - like a telepresence robotics system with which you can see reality through the eyes of a robot, or an Augmented Reality AI that shows the world from the perspective of an artist. He works on various applications in the fields of AR/VR, AI and gesture control, putting them to use e.g. in autonomous or gesture-controlled drones. He is also involved in other open-source projects written in Java, C# and all kinds of JavaScript-based languages. Thomas studied IT at TU Munich and is passionate about software development and all other aspects of technology. As an Intel Software Innovator and Black Belt, he promotes new technologies like AI, AR/VR and robotics around the world. For this he has received, among others, a JavaOne Rockstar award.

Artificial Intelligence in JavaScript using TensorFlow.js (in English)

Mathias Burger, TNG

TensorFlow.js can be used to develop machine learning models that run on Node.js or in the browser. Existing models can also be reused or retrained. Using WebGL, the framework provides vendor-independent support for hardware acceleration and can even outperform CPU-bound training. By example, I will demonstrate how to use the low-level APIs and how to build a gesture classifier that gathers training data from the webcam.

Mathias Burger is a Senior Software Consultant at TNG Technology Consulting GmbH and focuses on proof-of-concept solutions using machine learning. He is very passionate about technological advancements and interested in the latest research, especially in the field of computer vision. When not coding, he likes to go cycling or read fantasy and sci-fi books.

Next Generation Phenotyping using DeepGestalt in Clinic, Research and Variant Analysis (in English)

Yaron Gurovich, CTO, Face2Gene / FDNA

Facial analysis technologies have recently surpassed the capabilities of expert clinicians in identifying syndromic phenotypes. To date, these technologies could only identify the phenotypes of a few diseases, limiting their role in clinical settings, where hundreds of diagnoses must be considered. DeepGestalt uses computer vision and deep learning algorithms to highlight numerous genetic syndromes correlating with patients’ phenotypes analysed from unconstrained 2D images. DeepGestalt achieves 91% top-10 accuracy in identifying over 200 different genetic syndromes and has outperformed clinical experts in three separate experiments. We suggest that this form of artificial intelligence is ready to support genetics in Clinic, Research and Variant Analysis practices and will play a key role in the future of precision medicine. In this talk I will review the DeepGestalt technology and demonstrate its use in each aspect.

Yaron Gurovich is the Chief Technology Officer at FDNA. He has wide experience in Computer Vision, Machine Learning and Deep Learning, in both research and production. For the last six years, Yaron has invested his time in researching and developing technology in the field of rare genetic disorders, applying his knowledge and technological tools to advance this important field to the next level of Next Generation Phenotyping.

Architecture & Design

5 years of Microservices - Lessons Learned

Alexander Heusingfeld, Head of Digital Architecture, Vorwerk International

In 2013, everything started with the talk „Programmer Anarchy” by Fred George. Is it possible to split large systems into smaller modules and to deploy them independently? Which technologies would be necessary? Which skills would our team need? Would there be only one team? How could we get around our operations colleagues? And why do we want microservices anyway - what would happen if we actually rolled them out? The network would be stable, wouldn’t it? Should I start with ‘Monolith First’ or with self-contained systems? Can microservices move dusty modernisation projects from BigBang to Build-Measure-Learn and Analyse-Evaluate-Improve while also leading to continuous improvement? In this talk, I want to share the experiences, war stories and learnings I gathered through several microservices projects, trainings and talks during the last 5 years, true to the motto “hindsight is easier than foresight”. We will talk about technologies from VMware to AWS and Kubernetes, and from Netflix Hystrix to Spring Boot. We will discuss implicit assumptions, generic mistakes, incidents and what “you build it, you run it” has in common with eye bags. I will also introduce a microservice taxonomy that has been developed over the course of time. Furthermore, I will tell you about innovation tokens that helped me as an architect not to overwhelm people with too many innovations. We will also talk about the most important aspects of architecture, people and communication, as well as my assumption that “it depends” might be _the_ right answer after all.

The more people's work shifts from the tactile to the digital, the more important the design of interfaces for interacting with the digital becomes. In different application areas, we have struggled with two questions over the past few years: How can work be transferred into an interaction model in which our present work processes can still be recognized, but in which helpful new digital features make a real improvement? And how can the design process for such interaction models be integrated into agile software development, so that a highly usable system with a permanently flexible and adaptable software architecture emerges? We have found answers at various levels that we would like to present to the audience in this talk.

Dr. Carola Lilienthal is the Managing Director of WPS - Workplace Solutions GmbH. Since 2003, she has regularly analyzed the future viability of software architectures on behalf of her clients, and she speaks at conferences about this topic. In 2015, she summarized her experience from more than one hundred analyses of systems between 20,000 and 15 million LOC in the book "Long-Lasting Software Architectures".

Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them for speed contests, interview prep, company training, university coursework, practice problems, or to challenge each other. In this talk, the creator of Advent of Code will give a behind-the-scenes look at what it takes to run a month-long programming event for over 200,000 people.

Eric Wastl is the creator of Advent of Code, an Advent calendar of small programming puzzles. He's a software engineer with over 15 years of professional experience spanning software architecture, web development, security, system administration, math, developer education, and mentoring.

The Architect Elevator: Connecting Penthouse and Engine Room (in English)

Gregor Hohpe, Technical Director of the CTO Office, Google

Many large enterprises are feeling pressure: digital disruptors attack with brand-new business models and no legacy; the “FaceBook generation” has dramatically increased user expectations; and access to state-of-the-art technologies has been democratized by cloud providers. This is tough stuff for enterprises that have been, and still are, very successful, but are built around traditional technology and organizational structures. “Turning the tanker”, as the need to transform is often described, has become a board room-level topic in many traditional enterprises. Chief IT Architects and CTOs play a key role in such a digital transformation endeavor. They combine the technical, communication, and organizational skills to create business value from a tech stack refresh, to look behind buzzwords like “agile” and “DevOps”, and to build a technology platform that assures quality while moving faster. They do so by riding the “Architect Elevator” from the penthouse, where the business strategy is set, to the engine room, where the enabling technology is implemented. I rode that elevator for 5 years in a major financial services organization and am now advising major corporations on their digital journey. I collect stories from the daily life of IT transformation and package them in lighthearted, but meaningful anecdotes.

Where is my Package?

Dr. Jan Deiterding, IT Architect, Deutsche Post

It used to be easier: the godmother called on the rotary phone and announced that she had carried the package for the children to the post office. Usually, a few days later, they found a small yellow card in their mailbox noting that they could pick up the package at the branch office from 10:00 am on the next working day. Today, the online shop sends a push message to your smartphone that your package will be delivered tomorrow between 12:30 and 12:37 pm. If you are not at home then, you can spontaneously redirect it to the nearest packet box. You can see online where the package is right now, and if you want to return your purchase, you simply print out the parcel label at home. Until the beginning of 2018, there were many different systems for this within DHL. All of them had their own websites, looked different and offered diverging services. To make things worse, these systems were developed by different teams spread across Germany. Our goal was to build a single website for DHL customers in which all services are brought together in a uniform presentation. At the same time, no central monolith was to be created; instead, the autonomy of the individual systems was to be preserved. In this talk, this endeavor will be considered from the point of view of a software architect. Based on three examples, I will show how an architect spends his day: coordinating developers, developing technical concepts, and ensuring that the hardware plays along.

Jan Deiterding works as a software architect at DHL. In this function, he is responsible for the expansion and development of the websites and APIs for private and business customers in Germany and Europe. He interacts with stakeholders, departments, developers, and operations, and tries to preserve the noble ideals of clean software design. Before that, he wrote many thousands of lines of Java code in IT consulting and a doctoral thesis on self-adaptive industrial robots.

Computer & Games

The history of women in computing has largely been lost, like the histories of the factory workers who built the first cars. Yet women invented programming, were the original developers of the ENIAC, created assembly language and developed the first compiler (not to mention the terms “compiler” and “bug”), and were instrumental to the development of many seminal programming languages. So what happened? It’s a drama that’s equal parts cultural excavation and celebration. In this talk, Brenda Romero digs up this fascinating history, explores what happened, and looks at how the artifacts of this legacy still affect computing and its growth today.

Brenda Romero is a BAFTA award-winning game designer, artist, and Fulbright award recipient who entered the video game industry in 1981. As a designer, she has worked on 47 games and contributed to many seminal titles, including the Wizardry and Jagged Alliance series and titles in the Ghost Recon, Dungeons & Dragons, and Def Jam franchises. Away from the machine, her analog series of six games, The Mechanic is the Message, has drawn national and international acclaim, particularly Train and Siochán Leat, a game about her family’s history, which is presently housed in the National Museum of Play. Most recently, in 2018, she received a Lifetime Achievement Award (the Bizkaia award) at the Fun and Serious Games Festival in Bilbao, Spain, and the inaugural Grace Hopper Award presented by Science Foundation Ireland at the Women in Tech conference in Dublin, Ireland. In 2017, she received the 2017 Development Legend award at the Develop: Brighton. That same year, she won a BAFTA Special Award for her contributions to the industry. In 2015, she won the coveted Ambassador’s Award at the Game Developers Choice Awards. In 2014, she received a Fulbright award to study Ireland’s game industry, academic and government policies.

Chess and mathematics belong to the world's intellectual cultural heritage. Since its creation, chess has been played throughout the world in almost all cultures. Mathematics is a human resource that has grown over thousands of years. It is present - often unnoticed - in many things that surround us: the heating heats and the plane flies only when math is involved. Chess and mathematics are both sources of enduring, perceptible beauty. In both, the aesthetics lie in the radiance of cleverly linked ideas. The talk shows highlights of these two worlds of ideas as well as the manifold relations between chess and mathematics.

Christian Hesse, born in 1960, started school in 1966 in Neu-Listernohl, a town of 1,500 souls in the Sauerland; 21 years later he earned his PhD at Harvard University (USA). From 1987 to 1991 he taught as an Assistant Professor at the University of California at Berkeley. In 1991 he was appointed Professor of Mathematics at the University of Stuttgart. Among his passions is chess. He has written two books about it, including the essay volume “Expeditions to the Chess World”. Together with the Klitschko brothers, football coach Felix Magath and former World Champion Anatoly Karpov, he was named International Ambassador of the 2008 Chess Olympiad. Christian Hesse is married and has an 18-year-old daughter and a 14-year-old son. He lives in Mannheim with his family.

The Programming Principles of Id Software (in English)

John Romero, Games Developer and Co-Founder, id Software

The Early Days of Id Software: As co-founders of id Software, John Romero and John Carmack created the code behind the company's seminal titles. The principles they defined through experience in id’s earliest days built upon one another to produce a unique methodology and a constantly shippable codebase. In this talk, John Romero discusses id software’s early days, these programming principles and the events and games that led to their creation.

John Romero is an award-winning game development icon whose work spans over 130 games, 108 of which have been published commercially. Romero is the "father of first-person shooters", having led the design of and contributed to the programming and audio design of the iconic and genre-defining games DOOM, Quake, Heretic and Hexen. Romero was also one of the earliest supporters of eSports and is a current competitive DOOM and Quake player. To date, Romero has co-founded eight successful game companies, including id Software. He is considered to be among the world’s top game designers, and his products have won well over 100 awards. Romero most recently won a Lifetime Achievement award at the Fun & Serious Games Festival in Bilbao and the Legend Award at 2017’s Develop: Brighton. One of the earliest indie developers, Romero began working in the game space in 1979 on mainframes before moving to the Apple II in 1981. He is a completely self-taught programmer, designer and artist, having drawn his inspirations from early Apple II programmers.


One of the biggest societal issues nowadays is the ever-increasing demand for energy, dwindling fossil fuels and the drive for clean and sustainable power generation. Among the most common renewable energy technologies, solar power generation has perhaps the greatest potential due to the immense energy output of the sun. To date, silicon-based solar cells are ubiquitous, but they are far from sufficient to cover the entire energy needs of the earth. In the search for new materials to improve or completely replace silicon solar cells, the so-called halide perovskites appeared in 2011 for the first time. Due to their extremely advantageous optical properties, perovskite-based solar cells have since been improved to the point that peak conversion efficiencies already come close to those of silicon-based solar cells. In this lecture, halide perovskites are presented as a material, and their fundamental properties and possible optoelectronic applications (e.g. solar cells, LEDs) are explained. In addition, the lecture addresses the (still) existing problems that currently prevent the wholesale commercialization of this highly interesting material.

Alexander Urban studied Physics at the University of Karlsruhe (Germany), obtaining the equivalent of an M.Sc. degree (German: Dipl.-Phys.) in 2006. During his studies he spent a year at Heriot-Watt University (UK), where he obtained an M.Phys. in Optoelectronics and Lasers in 2005. He then joined the Photonics and Optoelectronics Chair of Jochen Feldmann at the Ludwig-Maximilians-University (LMU) Munich (Germany) in 2007, where he worked on the optothermal manipulation of plasmonic nanoparticles, earning his Ph.D. summa cum laude in 2010. He expanded his expertise in the fields of plasmonics and nanophotonics in the group of Naomi J. Halas at the Laboratory for Nanophotonics at Rice University (Houston, TX, USA), beginning in 2011. He returned to the LMU in 2014 to become a junior group leader with Jochen Feldmann, where he led the research thrusts on optical spectroscopy, focusing on hybrid nanomaterials such as halide perovskite nanocrystals and carbon dots. In 2017 he was awarded a prestigious Starting Grant from the European Research Council, and shortly after that, in 2018, he received a call as a Full Professor of Physics (W2) at the LMU. Here, he now leads his own research group working on nanospectroscopy in novel hybrid nanomaterials.

Reflections on Missing Productivity Growth in an Era of Digital Transformation (in English)

Christina Timiliotis, Economist, OECD

Digital transformation represents an opportunity for improving productivity growth by enabling innovation and reducing the costs of a range of business processes. Yet despite the rapid advance of digital technologies, aggregate productivity growth has slowed over the past decade or so, raising the question of how digital technologies can boost productivity. Today, as in the 1980s, when Nobel-prize winner Robert Solow famously quipped that "we see computers everywhere but in the productivity statistics", there is again a paradox of rapid technological change and slow productivity growth. OECD work shows there is hope for the future. While not yet showing up in the aggregate productivity data, digital transformation is starting to have an impact on productivity in individual firms – and increasingly also in certain industries. Further and larger impacts should emerge as digital transformation evolves, especially in the wake of Artificial Intelligence, and as digital technologies, business models and practices diffuse to a greater number of firms and industries. Policy makers can help ensure that these impacts emerge by engaging in supportive policy actions, in particular for less productive firms. This would result in a double dividend in terms of productivity outcomes and inclusiveness.

Christina Timiliotis is an economist in the Economics Department at the OECD. Her research focuses on productivity, most notably the productivity-digitalisation nexus, but she also works for the Luxembourg country desk. Previously, Christina worked in the Trade and Agriculture Directorate on issues related to trade and the environment, specifically on fossil-fuel subsidies and trade in environmental goods and services. Christina holds an M.Sc. in Empirical and Theoretical Economics from the Paris School of Economics.

Hardware & Reality Hacking

Space applications are a crucial element in today's digital economy, enabling global communication networks, logistics monitoring or providing data for business analytics. With commercially-driven companies entering the market, the future is looking bright on a global scale. How can we harness space data and who is taking the lead in the space race 2.0?

Daniel Metzler is Co-Founder and CEO of Isar Aerospace, a Munich-based company developing orbital space launch vehicles with the purpose of lowering the barriers to commercial space access. He previously led a team of 40 students at the rocketry research group WARR, developing sounding rockets. Alongside his studies in Mechanical and Aerospace Engineering, he also developed multiple web services.

BepiColombo is a space mission to Mercury, developed by the European Space Agency (ESA) in collaboration with the Japan Aerospace Exploration Agency (JAXA). The mission is composed of two scientific orbiters, which were launched together on 20th October 2018 as a single composite spacecraft, including a module with electric propulsion to support the seven-year cruise phase, which also includes planetary swingbys at Earth (1x), Venus (2x) and Mercury (6x). The spacecraft is operated from ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. The presentation will introduce the mission's scientific objectives and specific challenges, as well as report on the main events since launch.

Elsa Montagnon comes from France. She studied aerospace engineering in France and Germany before joining the European Space Agency in 1999. She has since supported ESA's comet chaser Rosetta through its launch in 2004 and the Philae landing in 2014. Since 2007, she has been the Spacecraft Operations Manager of BepiColombo, ESA and JAXA's mission to Mercury.

Hype Meets Reality: Additive Manufacturing in Series Production

Dr. Joachim Zettler, CEO, Airbus APWORKS

Joachim Zettler is the Managing Director of APWORKS, a 100% subsidiary of Premium AEROTEC focusing on Additive Manufacturing. He became the founding CEO when APWORKS was initiated in 2013. Since the launch of the company, he has established market activities in various industries such as automotive, robotics, mechanical engineering, medical technology and aerospace. With a strong background in production technology for small- to large-scale aerospace and automotive components, Joachim Zettler worked from 2005 at the Airbus Group as a Project Manager and Technical Consultant. During his time at Airbus he mainly worked for the civil aircraft business of Airbus in France on manufacturing process optimization and supported the introduction of lean manufacturing methods. He holds a mechanical engineering degree from the Technical University of Munich, where he specialized in production methods.

A Real-Life Mission Impossible (in English)

Josh Dean, Magazine Writer and Author

In 1968, a Soviet nuclear ballistic missile submarine went missing in the remote Pacific and the Russians declared it lost. Under a veil of secrecy, the Americans found the wreck, and over the next six years, the CIA designed and executed perhaps the largest and most complicated covert spy mission in history. This mission, code-named Project Azorian, was beyond audacious. Engineers set out to build a ship and system that could pull a two-million-pound object off the floor of the sea, 16,500 feet below the surface, without anyone knowing its true purpose. How was that possible? Because the CIA had the perfect cover story. The ship, they told the world, was an ocean mining vessel with a very eccentric owner: Howard Hughes.

Josh Dean is a journalist and author who writes frequently for many US magazines, including Popular Science, GQ, and Bloomberg Businessweek, on a wide variety of subjects. His latest book, The Taking of K-129, tells the incredible true story of Project Azorian, the largest covert operation in CIA history and arguably the greatest-ever feat of naval engineering. His next book, The Impossible Factory, will tell the story of aerospace legend Kelly Johnson and his remarkable Lockheed Skunk Works, birthplace of the SR-71 Blackbird. Additionally, he is the host and co-creator of the true crime podcast The Clearing, coming this summer.

Disney Research was launched in 2008 as an informal network of research laboratories that collaborate closely with academic institutions such as the Swiss Federal Institute of Technology in Zurich and Carnegie Mellon University. Its mission is to push the frontiers of technology in areas relevant to Disney's creative entertainment businesses. Disney Research develops innovations for Parks, Film, Animation, Television, Games, and Consumer Products. Research areas include video and animation technologies, postproduction and special effects, digital fabrication, robotics, and much more. This event gives an overview of Disney Research, spiced with some examples of our latest and greatest inventions. We will focus on the collaboration between ETH Zurich and the Walt Disney Company and display the synergies arising from this program. The two presentations will highlight a company perspective as well as a view from the academic angle, and they will be followed by a panel discussion.

Presenters: Markus Gross, director of Disney Research Zurich, Professor of Computer Science, ETH Zurich, Scott Trowbridge, Vice President of Research and Development, The Walt Disney Company

Saving Carbon Dioxide by Creating an Affordable Digital Twin of a Wind Turbine for Data Driven Optimization (in English)

Robert Erdmann, CDO, fos4X

Wind turbines, sometimes called WECs (wind energy converters), are nearing a total weight of 1,000 tons and represent the result of modern mechanical engineering wizardry. However, many operators of wind farms face various types of curtailment; even right after launch, parks often deliver significantly less energy than expected. Operational maintenance continues to be done in fixed intervals. Towards the end of the certified lifetime, the economic viability of continued operation and its alternatives needs to be quantified, but data is hard to come by. And this is a recurring theme in the wind industry: the (non-)availability of data and the questionable quality and resolution of the data that does exist. The presentation will take you through a handful of case studies showing how fiber-optic sensors integrated into the rotor blades provide high-frequency (in this context, 40 Hz) data from which we calculate various relevant physical parameters (wind direction, speed, shear (vertical and horizontal), turbulence, etc.), but can also assess the condition of the blade (icing, cracks, lightning strikes, etc.). We do all this to increase the annual energy production (AEP), cut operating expenses, and prolong lifetime. And yes, the effects are highly measurable. We have been at this since 2012, and the technology is protected by more than 100 patents. Looking forward, we believe that asset owners will adopt data-driven optimization and that the turbines themselves will be online and interacting. Assets will go from underused to understood, and we will do our share to make this a reality faster.
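
One standard derived quantity such high-frequency wind-speed data enables is turbulence intensity, conventionally defined as the standard deviation divided by the mean of wind speed over an averaging window. A minimal sketch (the definition is the industry convention; the sample values are invented, and this is not fos4X's actual pipeline):

```python
import statistics

def turbulence_intensity(speeds):
    """Turbulence intensity = standard deviation / mean of wind speed
    over an averaging window (conventionally about 10 minutes)."""
    return statistics.pstdev(speeds) / statistics.mean(speeds)

# A short run of wind-speed samples in m/s (illustrative values)
samples = [8.1, 8.4, 7.9, 8.6, 8.0, 8.3, 7.8, 8.5]
print(round(turbulence_intensity(samples), 3))  # → 0.033
```

In practice such statistics would be computed per blade and per window from the 40 Hz sensor stream.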

Robert Erdmann leads the digital business at fos4X, encompassing IoT infrastructure, software development, and big data analytics. With his team, he creates model-based analysis and machine learning applications leveraging data from equally innovative sensor technology in the rotor blades. Before joining fos4X, Robert built various digital businesses at telecommunications giant Telefónica. Most recently, he launched the Advanced Data Analytics unit within Telefónica NEXT, a strategic spin-off focusing on mobility data. He started his career at semiconductor manufacturer Texas Instruments, holds master's degrees in electrical engineering (Stanford University) and business administration (UCLA Anderson School of Management), and has a strong international profile with experience in the US, China, Scandinavia, and France.

Blobby VR - Developing a Multiplayer Game of Volleyball in VR

Thomas Endres and Christoph Bergemann, TNG

How can we realize a multiplayer game of volleyball in VR? Is that even possible? Which pitfalls will we encounter on the way? And will the game be fun? Today we can answer the questions that drove us a year ago. During this talk, we will report on the challenges we had to overcome on our way to a completed game. Thanks to the advent of, and easy access to, professional game engines like Unity 3D, Unreal and CryEngine in recent years, programming games has become much easier. Nevertheless, the specific challenges in implementing an individual game remain. We will especially focus on the difficulties of dealing with network latency in conjunction with the physics engine. We will also show how we worked iteratively on the mechanics of the game and the physics of the ball in order to achieve not only a seemingly realistic but, most importantly, entertaining game behaviour.
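
One common way to soften the clash between network latency and a physics engine is dead reckoning: extrapolating a remote object's state from the last update received. This is a generic technique, not necessarily the exact approach used in Blobby VR; a minimal sketch:

```python
GRAVITY = -9.81  # m/s^2

def extrapolate_ball(pos, vel, latency):
    """Dead reckoning: predict where the ball is now from the state
    received `latency` seconds ago, assuming ballistic motion."""
    x, y = pos
    vx, vy = vel
    return (x + vx * latency,
            y + vy * latency + 0.5 * GRAVITY * latency ** 2)

# State received 100 ms ago: ball at (0, 2) m, moving at (3, 4) m/s
predicted = extrapolate_ball((0.0, 2.0), (3.0, 4.0), 0.1)
print(predicted)
```

The predicted state is then blended with subsequent authoritative updates so that corrections do not appear as visible jumps.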

Christoph Bergemann is a consultant at TNG Technology Consulting GmbH. After studying mathematics at LMU Munich, he worked as a research associate at the German Aerospace Center on remote sensing of the atmosphere. He is a member of the TNG Hardware Hacking Team working on the development of various prototypes.

In his role as an Associate Partner at TNG Technology Consulting in Munich, Thomas Endres works as an IT consultant. Besides his normal work for the company and its customers, he creates various prototypes, such as a telepresence robotics system with which you can see reality through the eyes of a robot, or an Augmented Reality AI that shows the world from the perspective of an artist. He works on various applications in the fields of AR/VR, AI and gesture control, putting them to use, for example, in autonomous or gesture-controlled drones. He is also involved in other open-source projects written in Java, C# and various JavaScript-based languages. Thomas studied IT at the TU Munich and is passionate about software development and all other aspects of technology. As an Intel Software Innovator and Black Belt, he promotes new technologies like AI, AR/VR and robotics around the world, for which he has received, among other awards, a JavaOne Rock Star award.

Quantum Computing

D-Wave's Approach to Quantum Computing (in English)

Dr. Colin P. Williams, Vice President of Strategy & Corporate Development, D-Wave Systems, Inc.

Quantum computing promises to revolutionize computer technology as profoundly as the airplane revolutionized transportation. After decades of incubation, early generation quantum computers are finally appearing that allow people to begin experimentation in earnest. In this talk, I will describe D-Wave's approach to quantum computing, explain its pros and cons with respect to competing schemes, and give the rationale behind our design choices. Furthermore, I will give examples of how the native optimization and sampling capabilities of our quantum processor can be exploited to tackle problems in a variety of fields including healthcare, physics, finance, simulation, artificial intelligence, and machine learning.
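
D-Wave's annealing processors natively minimise QUBO (quadratic unconstrained binary optimisation) objectives, the form to which optimisation and sampling problems are mapped. As a toy illustration of the problem class, with brute force standing in for the annealing hardware (the coefficients are invented):

```python
from itertools import product

def solve_qubo(Q, n):
    """Exhaustively minimise E(x) = sum Q[i,j]*x_i*x_j over binary x,
    the objective a quantum annealer samples low-energy solutions of."""
    energy = lambda x: sum(c * x[i] * x[j] for (i, j), c in Q.items())
    return min(product((0, 1), repeat=n), key=energy)

# Two variables: turning x1 on is rewarded, pairing it with x0 is penalised.
Q = {(0, 0): -1.0, (1, 1): -2.0, (0, 1): 3.0}
print(solve_qubo(Q, 2))  # → (0, 1)
```

Real problems are embedded onto the processor's qubit graph and sampled many times rather than solved exhaustively.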

Colin P. Williams is Vice President Strategy & Corporate Development at D-Wave Systems Inc., reporting directly to the CEO. He has spent over 20 years in quantum computing and has developed and patented algorithms and applications for both gate model and annealing model approaches. Prior to joining D-Wave, Colin was a Senior Research Scientist (SRS) and Program Manager for Advanced Computing Paradigms at the NASA Jet Propulsion Laboratory, California Institute of Technology. Earlier, as an acting Associate Professor of Computer Science at Stanford University, he devised, developed, and taught Stanford's first courses on quantum computing & quantum communications, and computer-based mathematics. Colin earned his Ph.D. in artificial intelligence from the University of Edinburgh in 1989 and wrote “Explorations in Quantum Computing,” one of the first textbooks in the field.

Tools and Methods

Do I Really Have to Test Everything Again? Test Impact Analysis in Research and Practice

Dr. Elmar Juergens, Founder and Consultant for Software Quality, CQSE GmbH, and Alexander Kaserbacher, TNG

Big test suites often have long run-times. In practice, they are therefore often not executed as part of Continuous Integration (CI), but only in later test phases. Unfortunately, many errors thus remain undetected during CI and are located too late, causing high costs. Test impact analysis allows you to run only those tests that have been affected by the code changes since the last test run and are therefore most likely to contain new bugs. In our empirical studies, we were able to find 90% of the errors in 2% of the test-execution time. The talk presents the fundamentals, research results, and empirical findings on test impact analysis. In addition, we report on our experiences in our own development process, in open-source projects, and with customers.
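
The selection step at the heart of test impact analysis can be sketched in a few lines: given a mapping from tests to the source files they exercise (the names here are hypothetical), only the tests touching a changed file need to be re-run:

```python
def select_impacted_tests(coverage_map, changed_files):
    """Return only the tests whose covered files intersect the change set."""
    changed = set(changed_files)
    return sorted(t for t, files in coverage_map.items() if files & changed)

# Hypothetical per-test coverage recorded during the last full run
coverage = {
    "test_login":   {"auth.py", "session.py"},
    "test_invoice": {"billing.py"},
    "test_report":  {"billing.py", "report.py"},
}
print(select_impacted_tests(coverage, ["billing.py"]))
# → ['test_invoice', 'test_report']
```

In practice the coverage map is recorded per test during a full run and kept up to date incrementally as the code evolves.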

Dr. Elmar Juergens did his PhD on static code analysis and received the software engineering prize of the Ernst Denert Foundation for his doctoral thesis. He is co-founder of CQSE GmbH and has been supporting teams to improve their quality assurance and testing processes for ten years. Elmar speaks regularly at national and international conferences and in 2015 was appointed Junior Fellow of the Gesellschaft für Informatik.

Alexander Kaserbacher is a software consultant at TNG Technology Consulting, where he is currently working on solution architecture for a client in the telecommunications industry. During his computer science studies he engaged intensively with software engineering and quality. His passion for technology also extends to the diverse effects of software on business and society.

The Nerdy Salesman: Why Technical People must Start Shaping their Businesses and are Best Equipped to do so (in English)

Johannes Lechner, Co-Founder and Head of Product, Payworks GmbH

Most people agree on how important an interdisciplinary and entrepreneurial mindset is to succeed in Information Technology. However, the sad reality in many companies and educational programs is still a deep divide between “business” and “technical” people and matters. My talk will cover why this needs to change and why I truly believe that “technical” people are best equipped to add tremendous value to their companies. I will share practical advice on how students, young professionals, and their managers can get there. 

Johannes Lechner is responsible for all things product at Munich-based payment technology provider Payworks. He completed degrees in Computer Science and Technology Management while being heads down in entrepreneurial endeavours. In his spare time he is an obsessive photographer and aspiring voluntary firefighter.

No Docker Required: Tools to Build Container Images

Dr. Martin Höfling and Patrick Harböck, TNG

While Docker is still the most popular way to build and run containers, it has security and scalability shortcomings for production systems and build pipelines. Recently, alternatives have emerged for building container images without Docker. Each of these addresses common problems: building without elevated privileges, reproducible results, caching of intermediate layers, and scaling CI/CD in larger organizations. We first introduce the basic structure of a container image and compare the build process for a selection of these tools. We then demonstrate their usage and discuss the strengths and weaknesses of each. Finally, we give guidance for selecting the right tool, which might not always be Docker.
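
To make the "basic structure of a container image" concrete: an image is content-addressed data, with each layer a tarball identified by the SHA-256 digest of its bytes and referenced from a JSON manifest. A simplified sketch (real OCI manifests carry additional media-type and config fields):

```python
import hashlib
import io
import json
import tarfile

# Build one "layer": a tar archive containing a single file.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello from a layer\n"
    info = tarfile.TarInfo(name="etc/greeting")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

layer = buf.getvalue()
digest = "sha256:" + hashlib.sha256(layer).hexdigest()

# The manifest references layers by digest, so identical layers are shared.
manifest = {"schemaVersion": 2,
            "layers": [{"digest": digest, "size": len(layer)}]}
print(json.dumps(manifest, indent=2))
```

Because layers are addressed by content, any tool that produces the same bytes produces the same image, which is what makes daemonless, reproducible builds possible.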

Martin is a Principal Consultant at TNG Technology Consulting and focuses on cloud native technology and architecture of distributed systems. He is currently leading a team that builds interweaved web applications based on cloud-native technology for a larger organization.

Patrick is a Senior Consultant at TNG Technology Consulting. As a Site Reliability Engineer he is currently building and operating distributed web systems with Kubernetes on AWS for a startup company. He uses and contributes to open-source software where possible and likes to work with Python, TypeScript, Go and modern web technologies.

Once upon a time, we used software that ran on our own computers, that worked offline, and that stored its data in files on the local disk. Then we decided to put it all in the cloud. We gained some great features: real-time collaboration, like in Google Docs, for example. But we also lost control of our own data, and became dependent on far-away servers to allow us to access the data that we created. Automerge is part of an effort to get the best of both worlds. It is a JavaScript library for building real-time collaborative applications. However, apps built with Automerge also work offline, storing data locally, and synchronise their data with collaborators whenever a network is available. And although you can use it with servers, you don’t have to: synchronisation also works peer-to-peer, or via any network you choose. In this talk we will explore how Automerge deals with different users independently modifying shared data in a collaborative application (hint: by merging the changes… automatically!), how it achieves consistency in highly distributed settings, and where it is heading in the future.
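
Automerge itself is a JavaScript library; purely to illustrate the idea of merging independent offline edits, here is a toy three-way merge in Python. This is not Automerge's algorithm: its CRDTs handle concurrent edits to the same field, list ordering and history far more carefully.

```python
def merge(base, a, b):
    """Toy merge of two replicas edited offline from a common base:
    any key a replica changed relative to the base wins in the result."""
    merged = dict(base)
    for replica in (a, b):
        for key, value in replica.items():
            if base.get(key) != value:  # this replica changed the key
                merged[key] = value
    return merged

base = {"title": "List"}
a = {"title": "List", "items": ["milk"]}  # offline edit on device A
b = {"title": "Groceries"}                # offline edit on device B
print(merge(base, a, b))  # → {'title': 'Groceries', 'items': ['milk']}
```

The point of a CRDT is that this kind of merge is automatic, deterministic, and gives the same result on every device regardless of the order in which changes arrive.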

Martin Kleppmann is a distributed systems researcher at the University of Cambridge, and author of the acclaimed O’Reilly book “Designing Data-Intensive Applications”. Previously he was a software engineer and entrepreneur, co-founding and selling two startups, and working on large-scale data infrastructure at LinkedIn.

Site Reliability Engineering – DevOps on Steroids

Maximilian Bode, TNG

Everyone is talking about DevOps - but how do you do it? Google has published its internal best practices under the name Site Reliability Engineering. We took an intensive look at this subject. In this lecture, we present the key concepts and show applications from our everyday project work. If you've always wanted to know what's behind the terms Infrastructure as Code, Error Budgets, or Service Level Objectives, then this is the right place for you. Technology will not be neglected either: attendees will learn which tools and frameworks can support reliable and secure cloud operation. Cloud-native technologies such as Kubernetes and Prometheus play a central role, as do tools such as GitLab for CI/CD or Terraform for Infrastructure as Code.
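
For instance, the error budget behind an availability Service Level Objective is simply the complement of the target over the measurement period; an illustrative calculation (not tied to the talk's concrete examples):

```python
def error_budget_minutes(slo_percent, period_days=30):
    """Allowed downtime per period for a given availability SLO."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

# A 99.9% SLO leaves roughly 43 minutes of downtime per 30 days
print(round(error_budget_minutes(99.9), 1))  # → 43.2
```

As long as the budget is not exhausted, the team is free to ship new features; once it is, reliability work takes priority.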

Maximilian Bode is a Senior Consultant at TNG with a focus on big data engineering. Currently, he supports a large customer in the telecommunications industry in the design, development, and operation of an anonymization platform for transaction data from the mobile network. Until recently, he headed the SRE team there, which carried out the migration of the platform to the cloud. Max likes well-functioning teams, lean processes, and open-source software like Apache Flink and Kubernetes.

A new bug was found in a well-known virtualization software for which there was no working exploit, at least none publicly known. The purpose of this example is to demonstrate how a small bug in a piece of software can undermine the overall security of an application and the underlying operating system. We will also see how a real exploit is developed and implemented. In addition, the assumption that "virtualization equals sandbox" will be challenged.

Currently I am a Research Associate and Ph.D. student at the Fraunhofer Institute for Applied and Integrated Security, focusing on Product Protection & Industrial Security. Previously, I worked as an IT security analyst at the City of Munich in the field of external networks & DMZ. Additionally, I run a business that offers practice-oriented security training for developers, and I hold an M.Sc. in Applied Research in Engineering Sciences with a focus on Automotive Security. Most recently, in 2018, I became European Champion with the German team at the European Cyber Security Challenge in London, UK.