Speakers
No unicorns, no caticorns, just software development

Alex Soto
@alexsotob
Software Engineer
Istio Service Mesh & pragmatic microservices architecture
Alex Soto
Software Engineer at Red Hat
Alex is a Software Engineer in the Developers group at Red Hat. He is passionate about the Java world and software automation, and he believes in the open source software model.
Alex is the creator of the NoSQLUnit project, a member of the JSR 374 (Java API for JSON Processing) Expert Group, co-author of the book Testing Java Microservices for Manning, and a contributor to several open source projects. A Java Champion since 2017 and an international speaker, he has talked about new testing techniques for microservices and continuous delivery in the 21st century.
Istio Service Mesh & pragmatic microservices architecture
We have been celebrating 2018 as the Year of the Service Mesh, in which an open source effort known as Istio has taken off and changed how we design and release our applications.
As we move toward cloud-native infrastructure and build our applications out of microservices, we must fully face the drawbacks and challenges of doing so. Some of these challenges include how to consistently monitor and collect statistics, tracing, and other telemetry, how to add resiliency in the face of unexpected failure, how to do powerful feature routing, and much more.
Istio, and service mesh in general, helps developers solve these problems in a non-invasive way.
In this session, we’ll show how you can take advantage of these capabilities in an incremental way. We expect most developers haven’t adequately solved these issues, so we’ll take it step by step and build up a strong understanding of Istio, how to get quick wins, and how to harness its power in your production services architecture.

Armando González
CEO
Big Data is the New Currency
Armando González
CEO at RavenPack
Armando Gonzalez is President & CEO of RavenPack, the leading provider of big data analytics for financial institutions. Armando is an expert in applied big data and artificial intelligence technologies. He has designed systems that turn unstructured content into structured data, primarily for financial trading applications. Armando is widely regarded as one of the most knowledgeable authorities on automated text and sentiment analysis.
His commentary and research have appeared in leading business publications such as the Wall Street Journal and the Financial Times, among many others. Armando holds degrees in Economics and International Business Administration from the American University in Paris and is a recognized speaker at academic and business conferences across the globe.
Big Data is the New Currency
Today, data is largely collected, anonymized and monetized without the owner’s permission, and this is about to be disrupted - providing many benefits to hedge fund data buyers. This presentation provides a pathway for individuals to control and share in the value their data creates, and for data users to gain access to richer, more specific data sets.

Ashutosh Raina
@ashutoshraina
Site Reliability Engineer
Madaari : Ordering For The Monkeys
Ashutosh Raina
Site Reliability Engineer at eBay
Ashutosh is a member of the Site Reliability team at eBay, focused on bringing LDFI to the enterprise. He works at the intersection of academia and industry, trying his best to fuse them together. Previously, Ashutosh was a graduate student at the University of California, Santa Cruz, working at Disorderly Labs with Dr. Peter Alvaro on making distributed systems safer using Lineage Driven Fault Injection.
Madaari : Ordering For The Monkeys
Instead of randomly injecting faults (as Chaos Monkey does), what if we could order our experiments so as to perform the minimum number of experiments for maximum yield? We present a solution (and results) to the problem of experiment selection, using Lineage Driven Fault Injection to reduce the search space of faults.
Lineage Driven Fault Injection (LDFI) is a state-of-the-art technique for chaos engineering experiment selection. Since its inception, LDFI has used a SAT solver under the hood, which presents solutions to the decision problem (which faults to inject) in no particular order. As SREs, we would like to perform first the experiments that reveal the bugs customers are most likely to hit. In this talk, we present new improvements to LDFI that order the experiment suggestions.
In the first half of the talk, we will show that LDFI is a technique that can be widely used within an enterprise. We present the motivation for ordering the chaos experiments, along with some of the prioritization we used while conducting the experiments. We also highlight how ordering is a general-purpose technique that we can use to encode the peculiarities of a heterogeneous microservices architecture. LDFI can work in an enterprise by harnessing the observability infrastructure to model the redundancy of the system.
Next, we present experiments conducted within our organization using ordered LDFI, along with some preliminary results. We show examples of services where we discovered bugs, and how carefully controlling the order of experiments allowed LDFI to avoid running unnecessary experiments. We also present an example of an application where we declared the service shippable under the crash-stop model, as well as a comparison with Chaos Monkey showing how LDFI found the known bugs in a given application using orders of magnitude fewer experiments than a random fault injection tool like Chaos Monkey.
Finally, we discuss how we plan to take LDFI forward. We discuss open problems and possible solutions for scalarizing probabilities of failure, latency injection, integration with service mesh technologies like Envoy for fine-grained fault injection, and fault injection for stateful systems.
Key takeaways: 1) Understand how LDFI can be integrated in the enterprise by harnessing the observability infrastructure. 2) Limitations of LDFI with respect to unordered solutions, and why ordering matters for chaos engineering experiments. 3) Preliminary results of prioritized LDFI and a future direction for the community.

Barry S. Stahl
@bsstahl
.Net Software Engineer
Pushing AI to the Client with WebAssembly and Blazor
Barry S. Stahl
.Net Software Engineer
Barry (he/his) is a .NET Software Engineer who has been creating business solutions for enterprise customers for more than 30 years. Barry is also an Election Integrity Activist, a baseball and hockey fan, husband of one genius and father of another, and a 30-year resident of Phoenix, Arizona, USA. When Barry is not traveling around the world to speak at Conferences, Code Camps and User Groups, or to participate in GiveCamp events, he spends his days building intelligent distributed systems for enterprise customers and his nights thinking about the next [AZGiveCamp](http://azgivecamp.org), an annual event where software developers come together to build websites and apps for some great non-profit organizations.
You can follow Barry on [Twitter](http://twitter.com/bsstahl) or read his blog [Cognitive Inheritance](http://www.cognitiveinheritance.com).
Pushing AI to the Client with WebAssembly and Blazor
Want to run your AI algorithms directly in the browser, on the client side? Now you can, with WebAssembly and Blazor. Join us as we write code directly in WebAssembly. Then we’ll look at Blazor and how you can use it, along with WebAssembly, to run your tooling client-side in the browser.
Want to run your AI algorithms directly in the browser, on the client side, without the need for transpilers or browser plug-ins? Well, now you can with WebAssembly and Blazor. WebAssembly (WASM) is the W3C specification that will be used to provide the next generation of development tools for the web and beyond. Blazor is Microsoft’s experiment that allows ASP.Net developers to create web pages that do much of the scripting work in C# using WASM. Come join us as we learn to write code directly in WebAssembly’s human-readable format. Then we’ll look at the current state of Blazor and how you can use it, along with WebAssembly, to run your tooling client-side in the browser.

David G. Simmons
@davidgsIoT
Senior IoT Evangelist
Pushing it to the edge in IoT
David G. Simmons
Senior IoT Evangelist at InfluxData
David Simmons is the Senior IoT Developer Evangelist at InfluxData, helping developers around the globe manage the streams of data that their devices produce. He’s been passionate about IoT for nearly 15 years and helped to develop the very first IoT Developer Platform before “IoT” was even ‘a thing.’ He’s always had a thing about pushing the edge and seeing what happens. David has held numerous technical evangelist roles at companies such as DragonflyIoT, Riverbed Technologies, and Sun Microsystems.
Pushing it to the edge in IoT
Where is the edge in IoT, and how much can you do there? Data collection? Analytics? I’ll show you how to build and deploy an embedded IoT edge platform that can do data collection, analytics, dashboarding and much more - all using open source.
As IoT deployments move forward, the need to collect, analyze, and respond to data further out on the edge becomes a critical factor in the success – or failure – of any IoT project. Network bandwidth costs may be dropping, and storage is cheaper than ever, but at IoT scale, these costs can still quickly overrun a project’s budget and ultimately doom it to failure.
The more you centralize your data collection and storage, the higher these costs become. Edge data collection and analysis can dramatically lower these costs, plus decrease the time needed to react to critical sensor data. With most data platforms, it simply isn’t practical, or even possible, to push collection AND analytics to the edge. In this talk I’ll show how I’ve done exactly this with a combination of open source hardware – Pine64 – and open source software – InfluxDB – to build a practical, efficient and scalable data collection and analysis gateway device for IoT deployments. The edge is where the data is, so the edge is where the data collection and analytics need to be.

Dawid Furman
@dfurmans
Java & Scala Developer
Functional programming seems super cool but ...
Dawid Furman
Java & Scala Developer at Rindus
Dawid is an enthusiast of the functional paradigm, computers, and human languages. He is a guitarist in SolYNaranjaS, a multinational music project from Málaga, Costa del Sol. He is also a co-organiser of the Málaga Scala Developers community, as well as a traveler, motorcyclist, and speaker.
Functional programming seems super cool but ...
Gaining a good understanding of a different paradigm is not easy - it is like swapping out your cultural chip, and above all it requires a mind-shift.
Once you get it, your programming toolbox of abstractions will be much richer than ever before.
Let's see how we can gradually move into the functional paradigm world:
- how we can express all well-known abstractions from OOP
- classes and their relations and communication, polymorphism and some important design patterns as well
Let's compare and combine the best things from these two paradigms. Let's do it together and become better programmers!

David Reche Martínez
@drechema
Sales Engineer
API design first: from API inception to API implementation before you can say “Benalmadena”
David Reche Martínez
Sales Engineer at InterSystems
David is a Sales Account Manager at InterSystems, a leading company in technology for the development of health information systems. He holds a Computer Science degree and a PhD in Software Engineering and AI, both from the University of Malaga. He previously worked for ISOFT in Spain and Latin America. He has extensive experience in health informatics projects, especially integration projects, electronic medical record sharing and healthcare Big Data, and he regularly participates in standardization organizations such as HL7 and the standardization committee AEN/CTN 139.
API design first: from API inception to API implementation before you can say “Benalmadena”
The OpenAPI Initiative (https://www.openapis.org/) is the organization supporting a standard specification for defining APIs (https://github.com/OAI/OpenAPI-Specification). The OpenAPI Specification (OAS) defines a standard, programming-language-agnostic interface description for REST APIs, which allows both humans and computers to discover and understand the capabilities of a service without requiring access to source code, additional documentation, or inspection of network traffic. When properly defined via OpenAPI, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. Similar to what interface descriptions have done for lower-level programming, the OpenAPI Specification removes guesswork in calling a service.
InterSystems IRIS introduces support for an API-design-first approach, which allows you to design your specification first and then generate the server side from it. When designing the API first, we normally use Swagger Editor or a similar tool to create the specification and obtain the OAS document in JSON format whenever we want. In this presentation, we are going to design and implement an API, from scratch to live, in a very short time.

Emilio Camacho Rivas
@cerveros
Software Engineer
Why being interrupted is an essential part of your career
Emilio Camacho Rivas
Software Engineer at GIG
Emilio has over 12 years of experience as a backend developer, working with several companies. He is now a Software Engineer at GiG, mainly focusing on gamification. Emilio enjoys all programming/language paradigms, with all their pros and cons. In his free time, Emilio enjoys watching TV series and League of Legends.
Why being interrupted is an essential part of your career
During this talk, we will briefly cover why being interrupted is not as bad as we sometimes think and how to approach this situation at work and take advantage of it.

Ewan Slater
@ewanslater
Architect
Free the Functions with Fn project!
Ewan Slater
Architect at Oracle
Ewan started out as a research scientist and then drifted into IT. These days he is an architect in Oracle’s EMEA Technology Cloud Team, with over twenty years' experience in the technology industry and a lot less hair. He joined Oracle when they acquired Thor Technology in 2005. He intended to stay for six months - and he's still there.
He is currently focused on helping Oracle’s customers and partners adopt a cloud-native approach to development.
Outside of work, Ewan is an active member of the Norwich Ruby User Group (NRUG) and Digital East Anglia. He contributes to a number of open source projects and is one of the organisers of the DevEast conference in the UK.
Free the Functions with Fn project!
“Serverless” is the hottest ticket in town right now.
But many serverless platforms restrict your choice of language and/or dictate where your code runs.
In this talk, I’ll describe how we can go to the serverless ball with open source and the Fn project in particular.
“Serverless” aims to improve developer productivity by abstracting away the underlying infrastructure layers. The servers are still there - you just can’t see them.
This abstraction allows the developer to focus solely on the functions that deliver value to the business and not on the plumbing.
The economics of serverless are also interesting, since you only consume resources when your functions run, rather than having applications running continually, waiting to serve requests.
Sadly, some leading serverless platforms are not open and restrict choice in terms of language and deployment.
In this talk, I want to show how you can do serverless development with your choice of language, and deployment location.
Presentation Summary
- The evolution of “serverless”
- Functions as a Service (FaaS)
- Open source serverless frameworks
- The Fn project (see http://fnproject.io)
  - Fn functions: building, managing state, logging
  - FDKs (Function Development Kits)
- How we can link individual functions together to create serverless applications
- Building an example serverless application with Fn
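For flavour, a minimal Fn-style function in Java might look like the sketch below. The class and method names are illustrative; with the Fn Java FDK, a plain method like this serves as the function's entry point, referenced from the function's configuration rather than via annotations or server plumbing.

```java
// Illustrative sketch of a function body in the style used with the Fn Java FDK.
// There is no HTTP server code here: the platform invokes the method with the
// request body and returns its result to the caller.
public class HelloFunction {
    public String handleRequest(String input) {
        // Fall back to a default when the function is invoked with no payload.
        String name = (input == null || input.isEmpty()) ? "world" : input;
        return "Hello, " + name + "!";
    }
}
```

Once deployed (e.g. with the `fn` CLI), the platform handles containerizing the function and routing requests to it, so the code stays focused on the business logic.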

Florence Hudson
@Flo4Princeton
Co-Founder of the IEEE-Industry Standards and Technology Organization "Blockchain in Healthcare Global"
The TIPPSS Imperative for IoT - Ensuring Trust, Identity, Privacy, Protection, Safety and Security
Florence Hudson
Co-Founder of the IEEE-Industry Standards and Technology Organization "Blockchain in Healthcare Global"
Florence Hudson is a proven leader and C-level executive in technology, business, research and academia. She has led multi-billion-dollar business strategies and execution. She has been an independent director on for-profit and not-for-profit boards. Florence Hudson is a former IBM Vice President and Chief Technology Officer, and Internet2 Senior Vice President and Chief Innovation Officer. She is currently Special Advisor for Next Generation Internet at the Northeast Big Data Innovation Hub at Columbia University enabling US-EU collaboration for the European Commission's Horizon 2020 initiatives, and is Special Advisor for the NSF Cybersecurity Center of Excellence at Indiana University leading cybersecurity research transition to practice efforts. She is Founder & CEO of FDHint, LLC, specializing in advanced technology and diversity & inclusion consulting, including blockchain, artificial intelligence, big data, connected healthcare, Internet of Things, and Smart Cities. She serves on the Editorial Board for the open peer-reviewed journal "Blockchain in Healthcare Today", and is Co-Founder of the IEEE-Industry Standards and Technology Organization "Blockchain in Healthcare Global". She has a BSE in Mechanical and Aerospace Engineering from Princeton University, beginning her career at Grumman and NASA, and has attended executive education at Harvard Business School and Columbia University.
The TIPPSS Imperative for IoT - Ensuring Trust, Identity, Privacy, Protection, Safety and Security
Our increasingly connected world, leveraging the Internet of Things (IoT), creates great value in connected healthcare, smart cities, and more. The increasing use of IoT also creates great risk. We will discuss the challenges and risks we need to address as developers in TIPPSS - Trust, Identity, Privacy, Protection, Safety, and Security - for the devices, systems and solutions we deliver and use. Florence leads IEEE workstreams on clinical IoT and data interoperability with blockchain that address TIPPSS issues. She is an author of the IEEE articles "Enabling Trust and Security - TIPPSS for IoT" and "Wearables and Medical Interoperability - the Evolving Frontier", of the chapter "TIPPSS for Smart Cities" in the 2017 book "Creating, Analysing and Sustaining Smarter Cities: A Systems Perspective", and Editor-in-Chief of an upcoming book, "Women Securing the Future with TIPPSS for IoT".

Floris Sluiter
Independent IT specialist
Massively scalable ETL in real world applications: the hard way
Floris Sluiter
Independent IT specialist
Floris Sluiter is an independent IT specialist with expertise in Big Data and High-Performance Computing. He likes to work at the boundary between infrastructure and applications. Over the years, he has managed, implemented and optimized many large-scale data analytics applications, both on cloud platforms and on hard iron. At first he worked for research institutions; later he turned to freelancing for large and small commercial companies.
Massively scalable ETL in real world applications: the hard way
Big Data examples always give the correct answers. In the real world, however, Big Data might be corrupt or contradictory, or consist of so many small files that it becomes extremely hard to keep track - let alone scale. A solid architecture will help to overcome many of these difficulties.
Floris will talk about a real-world implementation of a massively scalable ETL architecture. Two years ago, at the time of the implementation, Airflow had just become part of Apache and still left much to be desired. The requirements from the start, however, were thousands of ETL tasks per day on average - and on occasion, this could become hundreds of thousands. The script-based method that was in place was already unable to meet the requirements on a day-to-day basis and needed to be replaced as soon as possible, so this custom framework was rolled out in just 8 weeks of development time.
Keywords: Architecture, AWS, PostgreSQL, Java.

Ghida Ibrahim
Quantitative engineer/data scientist
Leveraging AI for facilitating refugee integration
Ghida Ibrahim
Quantitative engineer/data scientist at Facebook
Ghida works as a quantitative engineer/data scientist in the edge infrastructure team at Facebook London, where she builds data-driven tools and models and performs in-depth analyses to drive the expansion and optimize the operation of one of the largest and most complex networks forming the internet. Many of the projects she has led aim at leveraging Facebook data insights to help build a more inclusive internet, by increasing internet penetration and the quality of experience for people online worldwide.
Leveraging AI for facilitating refugee integration
Today's world counts more than 20 million refugees, including over 1 million refugees resettled in Europe as a result of the ongoing Syrian civil war. When fleeing conflicts, refugees risk their lives with the hope of building a better future for themselves. However, upon resettling in a new country, refugees struggle to easily find the opportunities available to them and to filter the ones that are the most relevant to their profile and current context.
In this talk, we explain how AI can be leveraged to connect refugees, in real time and in a customized way, to the opportunities that will most accelerate their integration, bringing them a step closer to the better future they strive to build for themselves.

Greg Young
@gregyoung
Creator of CQRS
The Bizarre Mating Ritual Of The Whipnose Seadevil
Greg Young
Creator of CQRS
Gregory Young coined the term “CQRS” (Command Query Responsibility Segregation), and it was instantly picked up by the community, who have elaborated upon it ever since. Greg is an independent consultant and serial entrepreneur. He has 15+ years of varied experience in computer science, from embedded operating systems to business systems, and he brings a pragmatic and oftentimes unusual viewpoint to discussions. He’s a frequent contributor to InfoQ, a speaker/trainer at Skills Matter, and a well-known speaker at international conferences. Greg also writes about CQRS, DDD and other hot topics on codebetter.com.
The Bizarre Mating Ritual Of The Whipnose Seadevil
If you're an angler fish, you have it rough. You spend your life in the deep sea. It's lonely. Mates are hard to find. What do you do? If you're the male Whipnose Seadevil, you spend your life exclusively in search of that elusive, lifelong companion. You take this task so seriously that you forgo physical development and accept a stunted life -- that is, until you fix yourself to a female and release an enzyme that digests the skin of your mouth and her body, fusing you and your new-found love down to the blood-vessel level. And so you become dependent on her for survival, receiving nutrients via your newly formed shared circulatory system. In return, you provide valued sperm.
And therein lies the secret to building great software.
In this talk, Greg Young will make the case that polyandry and parasitic reproductive processes should serve as the model for programming. You'll learn how the Whipnose Seadevil adapts pragmatically to its deep-sea environment and manages to accomplish what most of us as programmers only dream of: reduced metabolic costs in resource-poor environments and improved lifetime fitness relative to free-living competitors.
Don't miss this opportunity to learn from one of software's great visionaries!

Helena Edelson
@helenaedelson
Principal Engineer
Toward Predictability and Stability At The Edge Of Chaos
Helena Edelson
Principal Engineer at Lightbend
Helena did her academic work in scientific research before getting into software engineering. She was formerly at Apple, working on platform infrastructure for distributed data/analytics/ML (aaS) at massive scale; VP of Product Engineering at Tuplejump, building a multi-tenant stream analysis machine learning platform; Senior Cloud Engineer at CrowdStrike, working on cloud-based realtime cyber security threat analysis; and Senior Cloud Engineer at VMware, automating cloud infrastructure for massive scale. She is a keynote speaker, has given conference talks at Kafka Summit, Spark Summit, Strata, Reactive Summit, QCon SF, Scala Days and Philly Emerging Tech, and is a contributor to several open source projects like Akka and FiloDB. She is currently a Principal Engineer at Lightbend.
Toward Predictability and Stability At The Edge Of Chaos
As we edge towards larger, more complex and decoupled systems, combined with the continual growth of the global information graph, our frontiers of unsolved challenges grow equally fast. Central challenges for distributed systems include persistence strategies across DCs, zones or regions, network partitions, data optimization, and system stability in all phases.
How does leveraging CRDTs and Event Sourcing address several core distributed systems challenges? What are useful strategies and patterns involved in the design, deployment, and running of stateful and stateless applications for the cloud, for example with Kubernetes? Combined with code samples, we will see how Akka Cluster, Multi-DC Persistence, Split Brain, Sharding and Distributed Data can help solve these problems.

Ionut Balosin
@ionutbalosin
Software Architect and Technical Trainer
A race of two compilers: GraalVM JIT versus HotSpot JIT C2. Which one offers better runtime performance?
Ionut Balosin
Software Architect and Technical Trainer
A Software Architect and Technical Trainer with vast experience in a wide variety of business applications. He is particularly interested in software architecture and performance & tuning topics. He is a speaker at external conferences (e.g. Devoxx, GeeCon, JokerConf, XP Days, JBCNConf, JPrime, RigaDevDays, Voxxed, I T.A.K.E. Unconf, DevTalks, Bucharest Java User Group, Agile Tour) and an occasional technical writer (InfoQ, DZone, etc.).
A race of two compilers: GraalVM JIT versus HotSpot JIT C2. Which one offers better runtime performance?
Do you want to check the efficiency of the new, state-of-the-art GraalVM JIT compiler in comparison to the older but most widely used JIT C2? Let’s have a side-by-side comparison, from a performance standpoint, on the same source code.
The talk reveals how the traditional Just-In-Time compiler (i.e. JIT C2) from HotSpot/OpenJDK internally manages runtime optimizations for hot methods, in comparison to the new, state-of-the-art GraalVM JIT compiler on the same source code, emphasizing the internals and strategies used by each compiler to achieve better performance in the most common situations (or code patterns). For each optimization, there is Java source code and the corresponding generated assembly code, in order to prove what really happens under the hood.
Each test is covered by a dedicated benchmark (JMH), timings and conclusions. Main topics on the agenda:
- Scalar replacement
- Null checks
- Virtual calls
- Lock coarsening
- Lock elision
- Lambdas
- Vectorization (a few cases)
The tools used during my research study are JITWatch, the Java Microbenchmark Harness (JMH), and perf. All test scenarios will be launched against the latest official Java release (i.e. version 11).

Ivan Kelly
@ivankelly
Software Developer
Infinite Topic Backlogs with Apache Pulsar
Ivan Kelly
Software Developer at Streamlio
Ivan is a software developer at Streamlio, where he works on Apache Pulsar and Apache BookKeeper. He's been involved with BookKeeper since its early days in Yahoo Labs Barcelona and also worked on the predecessor systems to Pulsar at Yahoo. His expertise is in replicated logging, distributed systems, and networking - though often not at the same time.
Infinite Topic Backlogs with Apache Pulsar
The talk is about how Apache Pulsar can have topic backlogs of unlimited size, opening up a whole array of Big Data use-cases that are not possible with other messaging systems. We also delve into tiered storage, which can make these massive backlogs very cheap.
Messaging systems are an essential part of any real-time analytics engine. A common pattern is to feed a user event stream into a processing engine, show the result to the user, capture feedback from the user, push the feedback back into the event stream, and so on. The quality of the result shown to the user is often a function of the amount of data in the event stream, so the more your event stream scales, the better you can serve your users.
Messaging systems have recently started to push into the field of long-term data storage and event stores, where you cannot compromise on retention. If data is written to the system, it must stay there.
Infinite retention can be challenging for a messaging system. As data grows for a single topic, you need to start storing different parts of the backlog on different sets of machines without losing consistency.
In this talk, I will describe how Pulsar uses Apache BookKeeper in its segment-oriented architecture. BookKeeper provides a unit of consensus called a ledger. Pulsar strings together a number of BookKeeper ledgers to build the complete topic backlog. Each ledger in the topic backlog is independent of all previous ledgers with regard to location. This allows us to scale the size of the topic backlog simply by adding more machines: when a storage node is added to a Pulsar cluster, the brokers will detect it and gradually start writing new data to the new node. There’s no disruptive rebalancing operation necessary.
Of course, adding more machines will eventually get very expensive. This is where tiered storage comes in. With tiered storage, parts of the topic backlog can be moved to cheaper storage such as Amazon S3 or Google Cloud Storage. I will also discuss the architecture of tiered storage, and how it is a natural continuation of Pulsar’s segment oriented architecture.
Finally, if you start storing data for a long time in Pulsar, you may want a means to query it. I will introduce our SQL implementation, based on the Presto query engine, which allows users to easily query topic backlog data, without having to read the whole thing.
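The segment-oriented idea described above can be sketched with a toy model (the classes and names here are purely illustrative, not Pulsar's or BookKeeper's actual API): a topic backlog is just an ordered list of sealed, independently placed ledgers, so reading the backlog walks the ledgers in order, regardless of where each one is stored.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a segment-oriented log. Illustrative only: real Pulsar topics
// are backed by BookKeeper ledgers, not in-memory lists.
public class SegmentedBacklog {
    // A sealed segment of the log, placed on some storage location.
    record Ledger(String storageLocation, List<String> entries) {}

    private final List<Ledger> ledgers = new ArrayList<>();

    // Each new ledger can live on a different node (or a cheaper tier)
    // without touching any ledger written before it.
    void appendLedger(String storageLocation, List<String> entries) {
        ledgers.add(new Ledger(storageLocation, List.copyOf(entries)));
    }

    // Reading the topic backlog is just reading the ledgers in order.
    List<String> readBacklog() {
        List<String> all = new ArrayList<>();
        for (Ledger l : ledgers) all.addAll(l.entries());
        return all;
    }

    public static void main(String[] args) {
        SegmentedBacklog topic = new SegmentedBacklog();
        topic.appendLedger("bookie-1", List.of("e1", "e2"));
        topic.appendLedger("bookie-2", List.of("e3"));     // new node, no rebalance
        topic.appendLedger("s3://bucket", List.of("e4"));  // offloaded to cheap storage
        System.out.println(topic.readBacklog()); // prints [e1, e2, e3, e4]
    }
}
```

Because each segment is self-contained, adding capacity or offloading old segments to object storage never requires rewriting earlier data - which is the property that makes "infinite" backlogs practical.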

Jaana B. Dogan
@rakyll
Software Engineer
Servers are doomed to fail
Jaana B. Dogan
Software Engineer at Google
Jaana B. Dogan works on making Google production services more monitorable and debuggable. Previously, she worked on the Go programming language at Google and has a decade-long experience in building developer platforms and tools.
Servers are doomed to fail
Complexity in systems should be defeated wherever possible, but our computer systems are complex by default, and servers are doomed to fail. In this talk, we will go through new approaches in modern architectures to design and evaluate new computer systems.

Jacek Kunicki
@rucek
Software Engineer
How (Not) to Use Reactive Streams in Java 9+
Jacek Kunicki
Software Engineer at Softwaremill
I’m a passionate software engineer living in the JVM land - mainly, but not limited to it. I also tend to play with electronics and hardware. When sharing my knowledge, I always keep in mind that a working example is worth a thousand words.
How (Not) to Use Reactive Streams in Java 9+
Did you try to implement one of the new java.util.concurrent.Flow.* interfaces yourself? Then you’re most probably doing it wrong.
The purpose of this talk is to show that implementing them yourself is far from trivial and to discuss the actual reasons why they have been included in the JDK.
Reactive Streams is a standard for asynchronous data processing in a streaming fashion with non-blocking backpressure. Starting from Java 9, they have become a part of the JDK in the form of the java.util.concurrent.Flow interfaces.
Having the interfaces at hand may tempt you to write your own implementations. Surprising as it may seem, that’s not what they are in the JDK for.
In this session, we’re going to go through the basic concepts of reactive stream processing and see how (not) to use the APIs included in JDK 9+. Plus we’re going to ponder the possible directions in which JDK’s Reactive Streams support may go in the future.
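To give a feel for what the JDK already provides, here is a minimal, illustrative sketch (not from the talk) that uses the JDK’s own Flow.Publisher implementation, SubmissionPublisher, together with a hand-written subscriber that requests items one at a time:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Subscribes to the given publisher, requests items one at a time
    // (non-blocking backpressure), and returns everything received in order.
    static List<String> collect(SubmissionPublisher<String> publisher, String... items) {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        publisher.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);            // ask for exactly one item
            }
            public void onNext(String item) {
                received.add(item);
                subscription.request(1); // ask for the next one
            }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete()         { done.countDown(); }
        });
        for (String item : items) publisher.submit(item);
        publisher.close();               // triggers onComplete downstream
        try { done.await(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(collect(new SubmissionPublisher<>(), "hello", "world"));
        // prints: [hello, world]
    }
}
```

Note that the subscriber side shown here is the easy half; honouring the full backpressure contract in a hand-rolled Publisher is exactly the kind of pitfall the talk explores.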

Jessica Kerr
@jessitronDeveloper
Collective Problem Solving in Music, Science, Software
Jessica Kerr
Developer at Atomist
Jessica Kerr is a developer and philosopher of software. At Atomist, she works on development and delivery automation: she writes code to let us write code to help us update and deliver code. At software conferences, she speaks about languages (Java, Scala, Clojure, Ruby, Elm, now TypeScript), paradigms (functional programming, DevOps), and now symmathesy. Her interests include resilience engineering, graceful systems, and Silly Things (which her daughters find on the internet). Find her work at blog.atomist.com, the podcast Greater than Code, and on twitter at her true name: @jessitron.
Collective Problem Solving in Music, Science, Software
There's a story to tell, about musicians, artists, philosophers, scientists, and then programmers.
There's a truth inside it that leads to a new view of work, that sees beauty in the painful complexity that is software development.
Starting from _The Journal of the History of Ideas_, Jessica traces the concept of an "invisible college" through music and art and science to programming. She finds the dark truth behind the 10x developer, a real definition of "Senior Developer," and a new name for our work and our teams.

Jon Bratseth
@jonbratsethDistinguished Architect
Big data serving: The last frontier. Processing and inference at scale in real-time
Jon Bratseth
Distinguished Architect at Verizon
Jon is a distinguished architect at Verizon, and the architect and one of the main contributors to Vespa, the open big data serving engine. Jon has 20 years of experience as an architect and programmer on large distributed systems. He has a master's in computer science from the Norwegian University of Science and Technology.
Big data serving: The last frontier. Processing and inference at scale in real-time
The big data world has mature technologies for offline analysis and learning from data, but has lacked options for making decisions in real time. This talk introduces vespa.ai - a mature platform for processing data and making inferences at large scale at end-user request time.
Offline and stream processing of big data sets can be done with tools such as Hadoop, Spark, and Storm, but what if you need to process big data at the time a user is making a request?
This talk introduces Vespa – the open source big data serving engine which targets the serving use cases of big data by providing response times in the tens of milliseconds at high request rates. Vespa allows you to search, organize and evaluate machine-learned models from, e.g., TensorFlow over large, evolving data sets. Among the applications powered by Vespa are the scoring and serving of ads in the world's third-largest ad exchange (Oath) and the online content selection at Yahoo, handling billions of daily queries over billions of documents. Vespa was recently open-sourced at https://vespa.ai.

John De Goes
@jdegoesFunctional programmer
John De Goes
Functional programmer at De Goes Consulting
A mathematician by training but a software engineer by vocation, John A. De Goes has been professionally writing software for more than 20 years. John has contributed to dozens of open source projects written in functional programming languages. In addition to speaking at Strata, OSCON, BigData TechCon, NEScala, and many other conferences, John also published a variety of books on programming. Currently, John consults at De Goes Consulting, a consultancy committed to solving hard business problems using the power of pure functional programming.

Kai Schroeder
VP of Engineering
Continuous Delivery in the Real World
Kai Schroeder
VP of Engineering at The Workshop
Software Engineer and VP of Engineering at The Workshop. He holds a degree in physics and is passionate about data. He’s been living and working in Malaga for nearly 15 years.
Continuous Delivery in the Real World
More and more IT organizations are taking the step to Continuous Delivery instead of thinking in sprints or releases. It’s an important investment, and it opens a world of tangible opportunities. In this talk, we’ll see how the ability to deploy individual features influences the way we work, design applications, and perform as an organisation.

Lars Hupel
@larsr_hConsultant
Numeric Programming with Spire
Lars Hupel
Consultant at INNOQ
Lars is a consultant with INNOQ in Munich, Germany. He has been using Scala for quite a while now and is known as one of the founders of the Typelevel initiative which is dedicated to providing principled, type-driven Scala libraries in a friendly, welcoming environment. He is known to be a frequent conference speaker and active in the open source community, particularly in Scala. He also enjoys programming in and talking about Haskell, Prolog, and Rust.
Numeric Programming with Spire
Spire is a Scala library for fast, generic, and precise numerics. It allows us to write generic numeric algorithms, provides the ‘number tower’ and offers a lot of utilities you didn’t know you needed.
Numeric programming is a notoriously difficult topic. For number crunching, e.g. solving systems of linear equations, we need raw performance. However, using floating-point numbers may lead to inaccurate results. On top of that, in functional programming, we’d really like to abstract over concrete number types, which is where abstract algebra comes into play. This interplay between abstract and concrete and the fact that everything needs to run on finite hardware is what makes good library support necessary for writing fast & correct programs. Spire is such a library in the Typelevel Scala ecosystem. This talk will be an introduction to Spire, showcasing the ‘number tower’, real-ish numbers and how to obey the law.

Lili Cosic
@LiliCosicSoftware Engineer
An intro to Kubernetes operators
Lili Cosic
Software Engineer at Red Hat
Lili Cosic is a Software Engineer at Red Hat, working on the operator-framework, enabling the community to make any application Kubernetes native. Previously she worked at Weaveworks on the Weave cloud integration with Kubernetes and before that, she found her passion for Kubernetes operators at Kinvolk helping develop the Habitat Operator. In her free time, Lili enjoys experimenting with Kubernetes, distributed systems, as well as writing operators for fun and not profit and dislikes writing about herself in the third person.
An intro to Kubernetes operators
An Operator is an application that encodes domain knowledge about the application it manages and extends the Kubernetes API through custom resources. Operators enable users to create, configure, and manage their applications. They have been around for a while now, and that has allowed patterns and best practices to be developed.
In this talk, Lili will explain what operators are in the context of Kubernetes and present the different tools out there to create and maintain operators over time. She will end by demoing the building of an operator from scratch, and also using the helper tools available out there.

Liz Keogh
@lunivoreLean and Agile Consultant
Leadership at Every Level
Liz Keogh
Lean and Agile Consultant
Liz Keogh is a Lean and Agile consultant based in London. She is a well-known blogger and international speaker, a core member of the BDD community and a passionate advocate of the Cynefin framework and its ability to change mindsets. She has a strong technical background with 20 years’ experience in delivering value and coaching others to deliver, from small start-ups to global enterprises. Most of her work now focuses on Lean, Agile and organizational transformations, and the use of transparency, positive language, well-formed outcomes and safe-to-fail experiments in making change innovative, easy and fun.
Leadership at Every Level
Leadership is easy when you're a manager, or an expert in a field, or a conference speaker! In a Kanban organisation, though, we "encourage acts of leadership at every level". In this talk, we look at what it means to be a leader in the uncertain, changing and high-learning environment of software development. We learn about the importance of safety in encouraging others to lead and follow, and how to get that safety using both technical and human practices; the necessity of a clear, compelling vision and provision of information on how we're achieving it; and the need to be able to ask awkward and difficult questions... especially the ones without easy answers.

Łukasz Gebel
Senior Software Engineer
Machine Learning: The Bare Math Behind Libraries
Łukasz Gebel
Senior Software Engineer at TomTom
Software engineer at TomTom by day, machine learning enthusiast at night. My leading technology is Java and Java-based frameworks. On a daily basis, I work on designing, implementing and deploying distributed systems that work in cloud environments, such as Microsoft Azure and AWS. I'm interested in classification problems and multi-agent systems. I love to learn, read books and play football – in no particular order.
Machine Learning: The Bare Math Behind Libraries
During this presentation, we will answer how much you’ll need to invest in a superhero costume to be as popular as Superman. We will generate a unique logo which will stand against the ever popular Batman and create new superhero teams. We shall achieve it using linear regression and neural networks.
Machine learning is one of the hottest buzzwords in technology today as well as one of the most innovative fields in computer science – yet people use libraries as black boxes without basic knowledge of the field. In this session, we will strip them to bare math, so next time you use a machine learning library, you’ll have a deeper understanding of what lies underneath.
During this session, we will first provide a short history of machine learning and an overview of two basic teaching techniques: supervised and unsupervised learning.
We will start by defining what machine learning is and equip you with an intuition of how it works. We will then explain the gradient descent algorithm with the use of simple linear regression to give you an even deeper understanding of this learning method. Then we will carry the same idea over to supervised neural network training.
Within unsupervised learning, you will become familiar with Hebb’s learning and learning with concurrency (winner takes all and winner takes most algorithms). We will use Octave for examples in this session; however, you can use your favourite technology to implement presented ideas.
Our aim is to show the mathematical basics of neural networks for those who want to start using machine learning in their day-to-day work or use it already but find it difficult to understand the underlying processes. After viewing our presentation, you should find it easier to select parameters for your networks and feel more confident in your selection of network type, as well as be encouraged to dive into more complex and powerful deep learning methods.
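To make the gradient descent step concrete, here is an illustrative sketch in Java (the session itself uses Octave) of fitting a simple linear regression by gradient descent; the learning rate and epoch count are arbitrary choices for this toy data:

```java
public class LinearRegressionGD {

    // Fits y ≈ a*x + b by gradient descent on the mean squared error.
    // Returns {a, b}. learningRate and epochs are tuning parameters.
    static double[] fit(double[] x, double[] y, double learningRate, int epochs) {
        double a = 0, b = 0;
        int n = x.length;
        for (int epoch = 0; epoch < epochs; epoch++) {
            double gradA = 0, gradB = 0;
            for (int i = 0; i < n; i++) {
                double error = (a * x[i] + b) - y[i]; // prediction minus target
                gradA += 2 * error * x[i] / n;        // partial derivative of MSE w.r.t. a
                gradB += 2 * error / n;               // partial derivative of MSE w.r.t. b
            }
            a -= learningRate * gradA;                // step against the gradient
            b -= learningRate * gradB;
        }
        return new double[] {a, b};
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5};
        double[] y = {3, 5, 7, 9, 11};                // generated by y = 2x + 1
        double[] ab = fit(x, y, 0.05, 5000);
        System.out.printf("a=%.3f b=%.3f%n", ab[0], ab[1]); // close to a=2, b=1
    }
}
```

Each epoch computes the partial derivatives of the mean squared error with respect to the slope and intercept, then moves both parameters a small step against the gradient.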

Dr Maggie Lieu
@Space_MogResearch Fellow
The big data Universe. Literally.
Dr Maggie Lieu
Research Fellow at European Space Agency
Maggie Lieu is a research fellow at the European Space Agency in Madrid where she is currently working on projects in preparation for Euclid, a space-based optical/IR telescope planned for launch in 2021. Maggie's research is focussed on developing Bayesian and machine learning tools to help us understand the nature of dark matter and dark energy and to constrain cosmology (the parameters that describe our Universe) from clusters of galaxies.
The big data Universe. Literally.
The advancement of technology in the last decade or so has allowed astronomy to see exponential growth in data volumes. ESA's space telescope Euclid will gather high-resolution images of a third of the sky, with ~850GB of data downloaded daily for 6 years; by 2032 the ground-based telescope LSST will have generated 500PB of data, and the radio telescope SKA will be producing more data per second than the entire worldwide internet. This talk will address what current techniques exist for handling big data volumes, how the astronomical community is preparing for this big data wave, and what other challenges lie ahead.

Marco Giovannini
Infrastructure Engineer
Reactive Infrastructure for functional code
Marco Giovannini
Infrastructure Engineer at MoPlay
Marco is an Infrastructure Engineer at Addison Global, creators of MoPlay. He lives and breathes automation in all aspects of his life. Automation is a great advantage because it lets him work on the core of a problem and gives the boring jobs to computers.
He is a cloud and open source lover. If you have stickers he can fit on his laptop, he will grab them from you ;)
Reactive Infrastructure for functional code
What happens in the afterlife of my code? Will it run smoothly? Why does this happen to my code? Can I see it? How does it scale?
These are some of the hundreds of existential software questions developers ask themselves and those around them. We are building reactive environments that address many of the issues we have heard.
We will briefly cover the Infrastructure as Code behind our reactive environment, the networking and connectivity that make integrations easy, security above compliance, and how centralised logs and monitoring in the cloud help us understand causes. We will show how we control the scaling of code and walk through the life of an example functional application in the reactive environment.
Join us to hear about patterns that we found useful to give happy answers.

Marcus Biel
@MarcusBielDirector of Customer Experience
Java, Turbocharged
Marcus Biel
Director of Customer Experience at Red Hat
Marcus Biel (@MarcusBiel) works as Director of Customer Experience for Red Hat. He is a well-known software craftsman, Java influencer and Clean Code Evangelist. Besides this, he works as a technical reviewer for renowned Java books such as Effective Java, Core Java SE 9 for the Impatient or Java by Comparison.
Aside from this, Marcus is an individual member of the Java Community Process (JCP), as well as a member of the association of the German Java User Groups e.V. (iJUG) and the local Java and software craftsmanship communities in his hometown, Munich.
Marcus has worked on various Java-related projects since 2001, mainly in the financial and telecommunications industries. In 2007 he graduated with a degree in computer science from the Augsburg University of Applied Sciences in Germany. In 2008, Marcus successfully completed his Sun Certified Java Programmer certification (SCP 6.0), of which he is still very proud today.
Java, Turbocharged
Over the last twenty years, there has been a paradigm shift in software development: from meticulously planned release cycles to an experimental way of working in which lead times are becoming shorter and shorter.
How can Java ever keep up with this trend when we have Docker containers that are several hundred megabytes in size, with warm-up times of ten minutes or longer? In this talk, I'll demonstrate how we can use Quarkus so that we can create super small, super fast Java containers! This will give us better possibilities for scaling up and down - which can be a game-changer, especially in a serverless environment. It will also provide the shortest possible lead times, as well as a much better use of cloud performance with the added bonus of lower costs.

Mario Vázquez
@VdeVazquezData Engineer
Near-free serverless data-pipelines on Google Cloud Platform
Mario Vázquez
Data Engineer at The Cocktail
Mario Vázquez has experience with many different clients specialised in the most diverse fields (banking, travel...), carrying out technical implementation audits for web and apps, web page tagging, and data extraction, transformation and loading.
He graduated in Services and Applications Computer Engineering from the University of Valladolid.
Near-free serverless data-pipelines on Google Cloud Platform
Serverless, serverless, serverless ... everyone wants it, everyone talks about it, but how many really have it? And at what price? In this talk, we will show how to design, develop and deploy a near-free serverless data pipeline on Google Cloud Platform.

Mark de Brauw
Founder
Crossing the bridge - how do we link end-user-computing and formal tech for data savvy teams
Mark de Brauw
Founder at Mesoica
Mark is founder of Mesoica, a data management firm working for the financial industry. Mark has 15 years of experience working in asset and wealth management firms and has focussed on how to make systems communicate better.
Crossing the bridge - how do we link end-user-computing and formal tech for data savvy teams
With Excel or custom tooling (Python, R, etc) there's flexibility to build data processing and preparation pipelines. Getting these to production level is often a different story as traditional or formal IT organisations are not well equipped to handle this kind of development.
In this talk, I'll show how we have combined SQL and NoSQL storage engines to create flexible and production ready data pipelines that can deal with unstructured data flows in an efficient manner.

Markus Eisele
@myfearDirector of Developer Advocacy
Streaming to a New Jakarta EE
Markus Eisele
Director of Developer Advocacy at Lightbend
Markus is a Java Champion, former Java EE Expert Group member, founder of JavaLand, reputed speaker at Java conferences around the world, and a very well known figure in the Enterprise Java world.
With more than 16 years of professional experience in the industry, he has designed and developed large enterprise-grade applications for Fortune 500 companies. As an experienced team lead and architect, he has helped implement some of the largest integration projects in automotive, finance and insurance companies.
Streaming to a New Jakarta EE
The world is moving from a model where data sits at rest, waiting for people to make requests of it, to where data is constantly moving and streams of data flow to and from devices with or without human interaction. Decisions need to be made based on these streams of data in real-time, models need to be updated, and intelligence needs to be gathered. In this context, our old-fashioned approach of CRUD REST APIs serving CRUD database calls just doesn't cut it. It's time we moved to a stream-centric view of the world.

Martin Thompson
@mjpt777High Performance Gangster
Interaction Protocols: It's all about good manners
Martin Thompson
High Performance Gangster
Martin is a Java Champion with over 2 decades of experience building complex and high-performance computing systems. He is most recently known for his work on Aeron and SBE. Previously at LMAX he was the co-founder and CTO when he created the Disruptor. Prior to LMAX, Martin worked for Betfair, three different content companies wrestling with the world's largest product catalogues, and was a lead on some of the most significant C++ and Java systems of the 1990s in the automotive and finance domains.
Interaction Protocols: It's all about good manners
Distributed systems collaborate to achieve collective goals via a system of rules: rules that afford good hygiene, fault tolerance, effective communication and trusted feedback. These rules form protocols which enable the system to achieve its goals.
Distributed and concurrent systems can be considered a social group that collaborates to achieve collective goals. In order to collaborate, a system of rules must be applied that affords good hygiene, fault tolerance, and effective communication to coordinate, share knowledge, and provide feedback in a polite, trusted manner. These rules form a number of protocols which enable the group to act as a system which is greater than the sum of the individual components.
In this talk, we will explore the history of protocols and their application when building distributed systems.

Michael Barton
@mrb_bartonSeñor Software Developer
Developing data leak investigation tools at the Guardian
Michael Barton
Señor Software Developer at The Guardian
Michael, aka Miguelito or Don Barton, works in the Editorial Tools team developing the software used by hundreds of journalists every day to publish award-winning content. Over the past two years, he has also been working to develop a platform for searching, analysing and collaborating on data leak driven investigations.
He formerly worked for ITRS in Málaga on the Valo streaming analytics project and enjoys digital signal processing, nerdy in-depth conversations about transport infrastructure and doom metal.
Developing data leak investigation tools at the Guardian
As data leaks move into the terabytes, journalists need tools to search, analyse and collaborate on their investigations. We will cover the technical lessons learnt over two years of development at the Guardian as we built our platform in both the cloud and running entirely air-gapped offline.
We will introduce GIANT, the Guardian’s new platform for searching, analysing and collaborating on data leak backed investigations.
With the size of leaks increasing (Edward Snowden: 55,000 files, the Paradise Papers: 13.4 million), the Guardian has built its own platform for analysis which has already seen success on several projects, most notably the Daphne Project which continues the work of the journalist Daphne Caruana Galizia.
In the talk we will cover how we designed our data model to effectively handle “any” possible file type and scale up to terabytes of stored data. We will discuss how using Neo4j we are able to reconstruct the threads of conversation between individuals and companies identified in the data and the surprising limits that come with using a graph database as our storage system of record.
We will also dive into our use of Elasticsearch, in particular how best to support leaks containing multiple languages and how we were able to add full Russian and Arabic language support to an existing dataset whilst the journalists continued their investigation using the tool.
We will also discuss our extractors, the system of plugins that process the files when we receive them. We will cover the lessons learned as we moved from calling in-process code in the JVM to Docker and containerisation to not only take advantage of the wide ecosystem of open source processing tools but also effectively scale out our computation both in AWS and also in our completely offline air-gapped deployment for more sensitive data.
Finally, we will also discuss the value of direct working relations between developers and journalists. This leads us to a change in how we developed our tooling, moving more towards building a secure platform upon which other more specialist tools can be written. We will show a great example of this with “Laundrette”, a new tool that lets data journalists add structure to hundreds of thousands of documents quickly.

Milan Savić
@MilanSavic14Software Engineer
Axon Server went RAFTing
Milan Savić
Software Engineer at AxonIQ
Milan Savić is a Software Engineer at AxonIQ. He has experience with various software projects ranging from chemical analyzers to contactless mobile payment systems. In some of those projects, CQRS and Event Sourcing came as a natural solution, but things had to be built from scratch almost every time. Finding out about Axon Framework got him interested in being part of the solution. In March 2018 he joined the AxonIQ team on a mission to build tools that help others build event-driven, reactive systems.
Axon Server went RAFTing
Raft is a well-known consensus protocol for distributed systems. Want to learn how consensus is achieved in a system with a large amount of data, such as Axon Server’s Event Store? Join this talk to hear all the specifics of data replication in a highly available Event Store!
Axon is a free and open source Java framework for writing Java applications following DDD, event sourcing, and CQRS principles. While especially useful in a microservices context, Axon provides great value in building structured monoliths that can be broken down into microservices when needed.
Axon Server is a messaging platform specifically built to support distributed Axon applications. One of its key benefits is storing events published by Axon applications. It is not unusual for the number of these events to run into the millions, or even billions. The availability of Axon Server plays a significant role in the product portfolio. To keep event replication reliable, we chose the Raft protocol as the consensus implementation for our clustering features.
In short, consensus involves multiple servers agreeing on values. Once they reach a decision on a value, that decision is final. Typical consensus algorithms make progress when any majority of their servers is available; for example, a cluster of 5 servers can continue to operate even if 2 servers fail. If more servers fail, they stop making progress (but will never return an incorrect result).
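The majority arithmetic described above can be sketched in a few lines (an illustrative calculation only, not Axon Server code):

```java
public class Quorum {

    // Smallest majority of a cluster of n servers.
    static int quorumSize(int n) { return n / 2 + 1; }

    // How many server failures the cluster can tolerate while still making progress.
    static int faultTolerance(int n) { return (n - 1) / 2; }

    public static void main(String[] args) {
        for (int n : new int[] {3, 5, 7}) {
            System.out.printf("cluster=%d quorum=%d tolerates=%d failures%n",
                    n, quorumSize(n), faultTolerance(n));
        }
    }
}
```

For a cluster of 5 servers the quorum is 3, so progress continues with up to 2 failed servers, matching the example above; this is also why consensus clusters are usually sized with an odd number of nodes.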
Join this talk to learn why we chose Raft, what our findings were during the design, implementation, and testing phases, and what it means to replicate an event store holding billions of events!

Moisés Macero
@moises_maceroSoftware Developer & Architect
The Six Pitfalls of building a Microservices Architecture (and how to avoid them)
Moisés Macero
Software Developer & Architect at The Practical Developer
Moisés is a Software Developer and Architect, and the author of the blog ThePracticalDeveloper.com and the book Learn Microservices with Spring Boot. He has been developing software since he was a kid, when his parents bought him a Sinclair ZX Spectrum and he started playing around with code. Since then, he has been involved in development, design, and architecture, and has worked in waterfall and agile organizations. His career started in Málaga, where he worked for big corporations and also small startups. He moved to Amsterdam in 2015 and is now working as Solutions Architect for a project based on Java and Spring Boot Microservices. Moisés has learned to be a pragmatic developer and architect and likes sharing his observations with others.
The Six Pitfalls of building a Microservices Architecture (and how to avoid them)
Thinking of moving to Microservices? Watch out! That quest is full of traps, social traps. If you are not able to handle them, you may be blocked by meetings, frustration, and endless challenges that will make you miss the monolith. In this talk, I share my experience and mistakes, so you can avoid them.
Creating or migrating to a Microservices architecture might easily become a big mess, not only due to technical challenges but mostly because of human factors: it’s a major change in the software culture of a company. In this talk, I’ll share my past experience as the technical lead of an ambitious Microservices-based product, I’ll go through the parts we struggled with, and give you some advice on how to deal with what I call the Six Pitfalls:
- The Common Patterns Phobia
- The Book Club Cult
- The Never-Decoupled Story
- The Buzz Words Syndrome
- The Agile Trap
- The Conway’s Law Hackers

Nicolas Kuhaupt
Research Data Scientist
Getting started with Deep Reinforcement Learning
Nicolas Kuhaupt
Research Data Scientist at Fraunhofer IEE
Nicolas Kuhaupt is working on projects in the field of Big Data and Artificial Intelligence with the goal to shift forward the digitalization of the Energy Transition. In this endeavor, he has observed the upcoming of Deep Reinforcement Learning (DRL) and has already implemented DRL algorithms. He is thrilled by the possibilities DRL has to offer and is looking forward to spreading his passion for it. Additionally, he loves to join data conferences and is looking forward to meeting interesting people at JOTB and talk about data.
Getting started with Deep Reinforcement Learning
Reinforcement Learning is a hot topic in Artificial Intelligence (AI) at the moment, the most prominent example being AlphaGo Zero, which shifted the boundaries of what was believed to be possible with AI. In this talk, we will have a look into Reinforcement Learning and its implementation.
Reinforcement Learning is a class of algorithms that trains an agent to act optimally in an environment. The most prominent example is AlphaGo Zero, where the agent is trained to place tokens on the board of Go in order to win the game. AlphaGo Zero won against the world champion, which was thought to be impossible at the time. This was enabled by combining Reinforcement Learning with Deep Neural Networks and is today known as Deep Reinforcement Learning. It has shifted the frontier of Artificial Intelligence and enabled multiple complex use cases, among them controlling the cooling devices in Google's server rooms; applying Deep Reinforcement Learning saved Google several million in power costs. In this talk, we will understand the basics of Deep Reinforcement Learning and implement a simple example. We will have a look at OpenAI's Gym, which is the de facto standard for Reinforcement Learning environments. This will enable the audience to implement both an environment and a Reinforcement Learning agent on their own.

Oleg Šelajev
@shelajevDeveloper Advocate
GraalVM: Run Programs Faster Everywhere
Oleg Šelajev
Developer Advocate at Oracle
Oleg Šelajev is a developer advocate at Oracle Labs working on GraalVM - the high-performance embeddable polyglot virtual machine. He organizes VirtualJUG, the online Java User Group, and a GDG chapter in Tartu, Estonia. In 2017 he became a Java Champion.
GraalVM: Run Programs Faster Everywhere
GraalVM is a high-performance runtime for dynamic, static, and native languages. GraalVM supports Java, Scala, Kotlin, Groovy, and other JVM-based languages. At the same time, it can run dynamic scripting languages such as JavaScript (including Node.js), Ruby, R, and Python. In this session we'll talk about the performance boost you can get from running your code on GraalVM, look at examples of running typical web applications with it, enhancing them with code in other languages, and creating native images for incredibly fast startup and low memory overhead for your services. GraalVM offers you the opportunity to write code in the language you want and run the resulting program really fast.

Oleh Dokuka
@OlehDokukaReactive Guy
RSocket - Future Reactive Application Protocol
Oleh Dokuka
Reactive Guy
Mainly a Java Software Engineer / Consultant focused on distributed systems development, adopting the Reactive Manifesto and Reactive Programming techniques. Open Source geek and active contributor to Project Reactor / RSocket. Along with that, a public speaker and author of the book "Reactive Programming in Spring 5.0".
RSocket - Future Reactive Application Protocol
Are you doing microservices? Exhausted by slow REST? Frustrated by unreliable gRPC? The answer is RSocket. RSocket is a new network protocol with Reactive Streams semantics. It will make your system super fast and resilient. Come and learn why RSocket is the future of cross-service communication.
A new generation of cross-service communication is coming, and it is called RSocket. RSocket is a new protocol that embraces Reactive Streams semantics in cross-service messaging.
The protocol enables backpressure control and allows building a canonical Reactive System. Although the protocol offers asynchronous message streaming, there are already a few competitors in this area, the best known of which is gRPC. In this session, we are going to learn why RSocket is an innovative solution for cross-service communication, whether we can compare it with gRPC at all and, if we can, what the key differences between RSocket and gRPC are, and why we should start using RSocket today.

Paolo Carta
@cl4merFreelancer
Serverless Continuous Delivery of Microservices on Kubernetes with Jenkins X
Paolo Carta
Freelancer at Interdiscount
Born and raised in beautiful Sardinia, Italy, I moved to Zurich to complete my studies at ETH (the Swiss Federal Institute of Technology).
After working on delay tolerant networks with Android devices I focused on Web development and scalable and resilient software architectures on the cloud.
Currently working as a Freelancer at Interdiscount, the market leader for electronics in Switzerland.
Serverless Continuous Delivery of Microservices on Kubernetes with Jenkins X
Jenkins X, the innovative K8s-native CI/CD project, is moving extremely fast. It has recently embraced the Knative project and Prow for K8s in order to build and deploy polyglot apps using serverless jobs. This new approach might be the future of CD in the cloud, improving performance and reducing costs.
In the last few years, we witnessed big changes in how we actually build, deploy and run applications with the rise of Microservices, Containers, Kubernetes and Serverless frameworks. Those amazing improvements need a cultural shift based on continuous improvement in order to deliver business value and delight our customers.
But how could a team achieve this ambitious goal?
This talk will introduce attendees to a revolutionary open source project called Jenkins X Serverless, which attempts to achieve this goal. It is a reimagined CI/CD ecosystem for Kubernetes that leverages Prow and Knative serverless functions.
After this talk, attendees will be able to develop effectively in a cloud-native way, in any language, on any Kubernetes cluster!
Let’s be finally Agile!

Philip Brisk
Researcher
Acoustic Time Series in Industry 4.0: Improved Reliability and Cyber-Security Vulnerabilities
Philip Brisk
Researcher at University of California, Riverside
Philip Brisk received the B.S., M.S., and Ph.D. degrees, all in Computer Science, from the University of California, Los Angeles (UCLA) in 2002, 2003, and 2006, respectively. From 2006 to 2009 he was a postdoctoral researcher at EPFL in Switzerland. Since 2009, he has been with the Department of Computer Science and Engineering at the University of California, Riverside. His research interests include the application of computer engineering principles to biological instrumentation, FPGAs and reconfigurable computing, and efficient implementation of computer systems. He is a Senior Member of the ACM and the IEEE.
Acoustic Time Series in Industry 4.0: Improved Reliability and Cyber-Security Vulnerabilities
Industry 4.0, aka the "Fourth Industrial Revolution," refers to the computerization of manufacturing. One important aspect of Industry 4.0 is the ability to monitor the health and reliability of a physical manufacturing plant using low-cost IoT sensors. For example, machine learning models can be trained to predict the physical degradation of a manufacturing system as a function of acoustic measurements obtained from strategically placed microphones; however, the same acoustic measurements can be used to reverse engineer proprietary information about the manufacturing process and/or precisely what is being manufactured at the time of recording. Thus, improved reliability and fault tolerance is achieved at the cost of what appears to be an unprecedented new class of security vulnerabilities related to the acoustic side channel.
As a case study, we report a novel acoustic side channel attack against a commercial DNA synthesizer, a commonly used instrument in fields such as synthetic biology. Using a smart phone-quality microphone placed on or in the near vicinity of a DNA synthesizer, we were able to determine with 88.07% accuracy the sequence of DNA being produced; using a database of biologically relevant known-sequences, we increased the accuracy of our model to 100%. An academic or industrial research project may use the synthetic DNA to engineer an organism with desired traits or functions; however, while the organism is still under development, prior to publication, patent, and/or copyright, the research remains vulnerable to academic intellectual property theft and/or industrial espionage. On the other hand, this attack could also be used for benevolent purposes, for example, to determine whether a suspected criminal or terrorist is engineering a harmful pathogen. Thus, it is essential to recognize both the benefits and risks inherent to the cyber-physical systems that will inevitably control Industry 4.0 manufacturing processes and to take steps to mitigate them whenever possible.

Piotr Czajka
Senior Software Engineer
Machine Learning: The Bare Math Behind Libraries
Piotr Czajka
Senior Software Engineer at TomTom
Programmer, retired mage, bookworm, storyteller and liberal arts devotee.
I'm into language semantics, its understanding and impact on the way people think. I love both natural and programming languages - professionally my heart belongs to Java, but I cheat on her with Python, Scala and, occasionally, other beautiful languages. In addition to my work at TomTom as a software engineer, I'm keen on artificial intelligence, mainly for natural language understanding. If we are to reach technological singularity, we better get on it!
Machine Learning: The Bare Math Behind Libraries
During this presentation, we will answer how much you’ll need to invest in a superhero costume to be as popular as Superman. We will generate a unique logo which will stand against the ever popular Batman and create new superhero teams. We shall achieve it using linear regression and neural networks.
Machine learning is one of the hottest buzzwords in technology today as well as one of the most innovative fields in computer science – yet people use libraries as black boxes without basic knowledge of the field. In this session, we will strip them to bare math, so next time you use a machine learning library, you’ll have a deeper understanding of what lies underneath.
During this session, we will first provide a short history of machine learning and an overview of two basic training techniques: supervised and unsupervised learning.
We will start by defining what machine learning is and equip you with an intuition of how it works. We will then explain the gradient descent algorithm with the use of simple linear regression to give you an even deeper understanding of this learning method. Then we will project it to supervised neural networks training.
Within unsupervised learning, you will become familiar with Hebbian learning and competitive learning (winner-takes-all and winner-takes-most algorithms). We will use Octave for examples in this session; however, you can use your favourite technology to implement the presented ideas.
Our aim is to show the mathematical basics of neural networks for those who want to start using machine learning in their day-to-day work or use it already but find it difficult to understand the underlying processes. After viewing our presentation, you should find it easier to select parameters for your networks and feel more confident in your selection of network type, as well as be encouraged to dive into more complex and powerful deep learning methods.
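To make the "bare math" concrete: the gradient descent method the session explains can be written out in a few lines for simple linear regression. This is a generic sketch (the session itself uses Octave), fitting y = w*x + b by stepping against the gradient of the mean squared error:

```python
# Gradient descent for simple linear regression y = w*x + b,
# minimising MSE = (1/n) * sum((w*x + b - y)^2).

def gradient_descent(xs, ys, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the MSE with respect to w and b
        dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw  # step against the gradient
        b -= lr * db
    return w, b
```

Given points lying on the line y = 2x + 1, e.g. `gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])`, the method recovers w ≈ 2 and b ≈ 1; the same update rule, projected through the chain rule, is what trains supervised neural networks.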

Radek Ostojski
@radekostojskiHead of Infrastructure
Reactive Infrastructure for functional code
Radek Ostojski
Head of Infrastructure at MoPlay
Radek is Head of Infrastructure at Addison Global, the creators of MoPlay. He loves automation in his digital life in the cloud, and the manual process of analogue photography on foggy mornings.
He has been a fan of Linux for two decades, and his days are happy whenever he can be a purist in his use of technology.
Reactive Infrastructure for functional code
What happens in the afterlife of my code? Will it run smoothly? Why does this happen to my code? Can I see it? How does it scale?
There are hundreds of existential software questions that developers ask themselves and those around them. We are building reactive environments that address many of the issues we have heard.
We will briefly cover the Infrastructure as Code behind our reactive environment; the networking and connectivity that make integrations easy; security beyond compliance; how centralised logs and monitoring in the cloud can help you understand root causes; and how we control the scaling of code. We will also show you the life of an example functional application in the reactive environment.
Join us to hear about patterns that we found useful to give happy answers.

Roland Kuhn
@rolandkuhnCTO at Actyx
What if You Need Reliability Comparable to Paper?
Roland Kuhn
CTO at Actyx
Dr. Roland Kuhn is CTO and co-founder of Actyx, a Munich-based company that makes state-of-the-art software technology accessible to small and midsize factories. He is also the main author of Reactive Design Patterns and previously led the Akka team at Lightbend.
What if You Need Reliability Comparable to Paper?
In the manufacturing industry downtime is very expensive; therefore, most small and midsize factories are still managed using paper-based processes. The problem space is perfectly suited for the microservices approach: well-defined and locally encapsulated responsibilities, collaboration and loose coupling between different links in the chain, rapid evolution of individual pieces for the purpose of optimising business outcomes. But how can we operate microservices such that they can deliver the resilience of paper? How can we leverage the locality of process data and benefit from high-bandwidth and low-latency communication in the Internet of Things?
This talk explores the radical approach of operating microservices in a peer-to-peer network on the factory shop floor, using event sourcing as the only means of communication and observation. We discuss the consequences of going all in on availability and partition tolerance. In particular, we consider eventual consistency and its impact on replacing nodes, upgrading services, and evolving event schemas. And we see how event sourcing can help us understand the behaviour of such an uncompromisingly distributed system and enable powerful testing, both before and after hitting an issue in production.
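The core event-sourcing idea the talk builds on fits in a few lines. This is a minimal, language-agnostic sketch (the event names are invented for illustration): the append-only event log is the only source of truth, and current state is always derived by replaying it, which is why a replaced node can rebuild itself from the same log.

```python
# Minimal event-sourcing sketch: state is never stored directly;
# it is recomputed by folding the immutable event log.

def apply(state, event):
    """Fold one event into the current state."""
    kind, payload = event
    if kind == "item_produced":
        state["produced"] = state.get("produced", 0) + payload
    elif kind == "item_scrapped":
        state["scrapped"] = state.get("scrapped", 0) + payload
    return state

def replay(log):
    """Rebuild state from scratch -- what a fresh or replaced node does."""
    state = {}
    for event in log:
        state = apply(state, event)
    return state

# The log is append-only; observers and new nodes all read the same events.
log = [("item_produced", 10), ("item_scrapped", 1), ("item_produced", 5)]
```

Replaying this log yields `{"produced": 15, "scrapped": 1}` on every node that sees the same events, which is the basis for both observation and testing against production event histories.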

Sara-Jane Dunn
@EssJayDScientist
Biological Logic
Sara-Jane Dunn
Scientist at Microsoft Research, Cambridge
Sara-Jane Dunn is a Scientist at Microsoft Research, Cambridge. She studied Mathematics at the University of Oxford, graduating with a MMath in 2007. She remained in Oxford for her doctoral research, as part of the Computational Biology group at the Department of Computer Science. In 2012, she joined Microsoft Research as a postdoctoral researcher, before transitioning to a permanent Scientist role in 2014. In 2016, she was invited to become an Affiliate Researcher of the Wellcome Trust-Medical Research Council Stem Cell Institute, University of Cambridge. Her research focuses on uncovering the fundamental principles of biological information-processing, particularly investigating decision-making in Development.
Biological Logic
The 20th Century was transformed by the ability to program on silicon, an innovation that made possible technologies that fundamentally revolutionised how the world works. As we face global challenges in health, food production, and in powering an increasingly energy-greedy planet, it is becoming clear that the 21st Century could be equally transformed by programming an entirely different material: biological matter. The power to program biology could transform medicine, agriculture, and energy, but relies, fundamentally, on an understanding of biochemistry as molecular machinery in the service of biological information-processing. Unlike engineered systems, however, living cells self-generate, self-organise, and self-repair, they undertake massively parallel operations with slow and noisy components in a noisy environment, they sense and actuate at molecular scales, and most intriguingly, they blur the line between software and hardware. Understanding this biological computation presents a huge challenge to the scientific community. Yet the ultimate destination and prize at the culmination of this scientific journey is the promise of revolutionary and transformative technology: the rational design and implementation of biological function, or more succinctly, the ability to program life.

Sergey Bykov
@sergeybykovPrincipal Software Development Lead at Microsoft
Drinking from the firehose, with virtual streams and virtual actors
Sergey Bykov
Principal Software Development Lead at Microsoft
Joined Microsoft in 2001 and worked in several product groups, such as e-Business Servers, Embedded Devices, and Online Services, before moving to Research in 2008 to incubate Orleans. Sergey continues leading the Orleans team after open-sourcing the project, now within Microsoft Studios.
Drinking from the firehose, with virtual streams and virtual actors
Event Stream Processing is a popular paradigm for building robust and performant systems in many different domains, from IoT to fraud detection to high-frequency trading. Because of the wide range of scenarios and requirements, it is difficult to conceptualize a unified programming model that would be equally applicable to all of them. Another tough challenge is how to build streaming systems with cardinalities of topics ranging from hundreds to billions while delivering good performance and scalability.
In this session, Sergey Bykov will talk about the journey of building Orleans Streams that originated in gaming and monitoring scenarios, and quickly expanded beyond them. He will cover the programming model of virtual streams that emerged as a natural extension of the virtual actor model of Orleans, the architecture of the underlying runtime system, the compromises and hard choices made in the process. Sergey will share the lessons learned from the experience of running the system in production, and future ideas and opportunities that remain to be explored.
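The "virtual" in virtual actors (and, by extension, virtual streams) means that actors are addressed purely by identity and activated on demand. Orleans itself is a .NET framework; the following is only a conceptual Python sketch of that activation-on-first-message idea, with the `Counter` actor invented for illustration:

```python
# Conceptual sketch of the virtual actor model: callers address actors
# by identity and never manage lifecycles; the runtime activates an
# actor when its first message arrives and routes later messages to
# the same activation.

class ActorRuntime:
    def __init__(self, actor_factory):
        self._factory = actor_factory
        self._activations = {}  # identity -> live actor instance

    def send(self, actor_id, message):
        if actor_id not in self._activations:
            # Activate on demand -- the actor "always exists" virtually.
            self._activations[actor_id] = self._factory(actor_id)
        return self._activations[actor_id].receive(message)

class Counter:
    """A toy actor: counts the messages it has received."""
    def __init__(self, actor_id):
        self.actor_id = actor_id
        self.count = 0

    def receive(self, message):
        if message == "inc":
            self.count += 1
        return self.count
```

Sending `"inc"` twice to `"device-1"` and once to `"device-2"` yields 2 and 1 respectively: each identity gets its own activation, created only when first addressed, which is what lets the model scale to very large cardinalities of actors or stream topics.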

Trent Walker
Head of Application Development
Big Data On Data You Don’t Have
Trent Walker
Head of Application Development at MSCI
As the Head of Application Development and the business head of Platform, Trent Walker is responsible for the development of the software products offered as part of MSCI’s analytics business as well as leading the business side of MSCI’s new Open Platform technology. Previously, Trent was a Managing Director at Barclays working in Prime Services Front Office, responsible for Risk and Margin Globally for Equity PB, Futures, Synthetics, Clearing, Repo, FX PB, and Barclays Global Netting Agreement. Previous to that role Trent worked as the CTO at BlueCrest Capital and was a Managing Director at Credit-Suisse in charge of Technology for Fixed Income Derivatives and the Client Website LOCuS. Trent was granted a Ph.D. in Mathematics at the University of California at Berkeley and taught briefly at UC Santa Barbara before joining the financial industry.
Big Data On Data You Don’t Have
Traditional Big Data is done on data you have. You load the data into a repository and perform map-reduce or other styles of calculation on it. However, certain industries need to perform complex operations on data you might not have. Data you can acquire, data that can be shared with you, and data that you can model are all types of data you may not have but may need to integrate instantly into a complex data analysis. The problem is: you may not even know you need this data until deep in the execution stack at runtime. This talk discusses a new functional language paradigm for dealing naturally with data you don't have and for making all data first-class citizens, regardless of whether you have it or not, and we will give a demo of a project written in Scala that deals with exactly this issue.
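The notion of making absent data a first-class citizen can be illustrated with lazy placeholders. This is a hypothetical Python sketch of the idea only (the talk's own project is in Scala, and its actual design is not shown here): a computation runs over a mix of present values and placeholders, and a placeholder is resolved (acquired, shared, or modelled) only when execution actually demands it.

```python
# Hypothetical sketch: "data you don't have" as a first-class value
# that carries a strategy for obtaining itself on demand.

class Missing:
    """A value we don't have yet, plus a way to obtain it at runtime."""
    def __init__(self, fetch):
        self._fetch = fetch        # e.g. acquire, request sharing, or model
        self._value = None
        self._resolved = False

    def get(self):
        if not self._resolved:     # resolve lazily, deep in the execution stack
            self._value = self._fetch()
            self._resolved = True
        return self._value

def force(x):
    """Treat present and missing data uniformly."""
    return x.get() if isinstance(x, Missing) else x

def weighted_sum(values, weights):
    # Neither list needs to be fully present when the call is made.
    return sum(force(v) * force(w) for v, w in zip(values, weights))
```

Here `weighted_sum([1.0, Missing(lambda: 2.0)], [0.5, 0.5])` evaluates to 1.5, fetching the missing value only at the moment the analysis needs it.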

Trustin Heuiseung Lee
@trustinSoftware Engineer
Armeria: The Only Thrift/gRPC/REST Microservice Framework You'll Need
Trustin Heuiseung Lee
Software Engineer at Line+
Trustin Lee is a software engineer who is best known as the founder of the Netty project, the most popular asynchronous networking framework in the JVM ecosystem. He enjoys designing frameworks and libraries that yield the best experience for developers. At LINE+ Corporation, the company behind 'LINE', the top mobile messenger in Japan, Taiwan and Thailand, he builds various open-source software, such as the microservice framework Armeria and the distributed configuration repository Central Dogma, to facilitate the adoption of microservice architecture.
Armeria: The Only Thrift/gRPC/REST Microservice Framework You'll Need
The founder of Netty introduces a new microservice framework, 'Armeria'. It is unique because it 1) has a Netty-based high-performance HTTP/2 implementation, 2) lets you run gRPC, Thrift, REST, and even Servlet web apps on a single TCP port in a single JVM, and 3) integrates with Spring WebFlux and Reactive Streams.
Armeria is a Netty-based open-source Java microservice framework that provides an HTTP/2 client and server implementation. It is different from other RPC frameworks in that it supports both gRPC and Thrift. It also supports RESTful services based on the Reactive Streams API, and even legacy web applications that run on Tomcat or Jetty, allowing you to mix and match different technologies in one service. This means you do not need to launch multiple JVMs or open multiple TCP/IP ports just because you have to support multiple protocols or migrate from one to another.
In this session, Trustin Lee, the founder of Netty project and Armeria, shows:
- What Armeria is.
- How to serve gRPC, Thrift and RESTful services on a single TCP/IP port and a single JVM.
- How to make your legacy Tomcat or Jetty-based application and modern reactive RPC service coexist.
- How to use Armeria’s universal decorator API to apply common functionality such as circuit breakers, DNS-based service discovery, distributed tracing and automatic retries, regardless of the protocol, something that was previously impossible with other RPC frameworks that focused on a single protocol.

Victor Tuson
VP of Engineering
When Cloud Native meets the Financial Sector
Victor Tuson
VP of Engineering at Ebury
Victor is the VP of Engineering at Ebury and comes from managing international software development teams creating operating systems and applications for mobile and cloud solutions. More recently, he worked as Director of Engineering at Bitnami (YC '13) and VP of Commercial Engineering at Canonical (the sponsors of Ubuntu Linux). Victor is passionate about system reliability and is a Go and Kubernetes enthusiast.
When Cloud Native meets the Financial Sector
We live in our own bubble of microservices and endlessly horizontal scaling infrastructure, but there is still critical infrastructure that runs the world of financial systems depending on Windows boxes, FTP servers, and single-threaded protocols. This talk is about how to glue these two worlds together, what works for us and what doesn't.

Vlad Vorobev
Solution Architect
How do we deploy? From Punched cards to Immutable server pattern
Vlad Vorobev
Solution Architect at EPAM Systems
Working in IT for more than 12 years, Vlad has tried the roles of developer, solution architect, team leader and manager. Throughout his career, Vlad has been most focused on development processes and the conjunction of technologies and people, lately adding cloud architectures and DevOps practices to his list of interests.
How do we deploy? From Punched cards to Immutable server pattern
A short retrospective of the evolution of deployment approaches and the key features of the most up-to-date concepts.

Zach Zimmerman
PhD student
From Billions to Quintillions: Paving the way to real-time motif discovery in time series
Zach Zimmerman
PhD student at University of California, Riverside
Zach Zimmerman is a 4th year PhD student at University of California, Riverside. His research is focused on scalable time series data mining, in particular, using GPUs, distributed computing, and machine learning to enable scaling of time series motif discovery. His work is being used across multiple domains by researchers in both academia and industry. He has industry experience through internships with Google, Intel, and Nvidia working on various projects, mostly in the high-performance computing space.
From Billions to Quintillions: Paving the way to real-time motif discovery in time series
The matrix profile is a tool which encodes the distance from each subsequence in a time series to its nearest neighbor. This "all-pairs nearest neighbor" information is very useful for finding motifs and anomalies in just about any time series data, and it is being used by many in both industry and academia.
In this talk, I will explain the path of optimizations we took and the lessons we learned in developing a scalable solution for this “all pairs” problem in time series as well as introduce our current work in establishing a real-time, streaming approximation.
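The definition of the matrix profile can be made concrete with a deliberately naive brute-force version. This sketch is only to pin down what is being computed; the scalable solutions the talk covers (GPU and distributed) replace this O(n²) scan with far faster algorithms:

```python
# Naive matrix profile: for each length-m subsequence, the z-normalised
# Euclidean distance to its nearest non-trivial neighbour. A motif
# (repeated pattern) shows up as a near-zero profile value.
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def matrix_profile(ts, m):
    n = len(ts) - m + 1
    subs = [znorm(ts[i:i + m]) for i in range(n)]
    profile = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if abs(i - j) < m // 2:   # exclusion zone: skip trivial self-matches
                continue
            d = np.linalg.norm(subs[i] - subs[j])
            profile[i] = min(profile[i], d)
    return profile
```

For a series containing the pattern [0, 1, 2, 0] twice, e.g. `matrix_profile(np.array([0, 1, 2, 0, 0, 0, 1, 2, 0, 0], dtype=float), 4)`, the profile is essentially zero at both motif occurrences; the talk's contribution is making this computation feasible at quintillion-scale and, ultimately, in real time.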



