Interview with Rikard König, written by Jasmine Shiel, Administration & Communications.

Have you ever wished that you could be your own boss, and turn that random thought you had into a successful business?

Do you remember those moments of “what if…”, before they faded into doubt and the realization of how hard it is to turn a thought into a company of people?
But what if you had the nudge to take it further?

Today, Ekkono is a company of 23, thriving on the ideas and research of Rikard König.

Rikard, do you remember the first time you thought about what is now known as Synthesis? Where were you, what were you doing at the time?
“Actually, the story of Synthesis, or as it was first known, Jon’s Correlation Engine (JCE), goes way back to the early days of Ekkono. Just four months in, around March 2017, we were still at the stage where every idea felt like a spark ready to ignite something big. We were wrapping up one of our late-night video chats – you know, the kind that happens after the kids are asleep and you can finally think a bit clearer. That’s when Jon (co-founder and CEO at the time) threw out this idea that really stuck with us.

We were all buzzing about what Ekkono’s tech could do, especially its knack for learning directly on edge devices. Then, Jon, out of the blue, wonders if we could take it a step further:

“What if,” he says, “we could send the things we learn on each device back up to the cloud. Could we then correlate the findings to spot outliers and unhealthy devices?”

That’s how Jon’s Correlation Engine came into being. It was just a name back then, but it was the seed that grew into what Synthesis is today.

And, you know, it was a time when explaining the need for AI on the edge to our customers was a big part of our job. So, the idea of sending learning back to the cloud was really groundbreaking. It wasn’t just about making our devices smarter; it was about connecting those smarts together in a way nobody had thought to do before.

And yeah, we stuck with JCE for a good while. It was more than just a project name; it was a reminder of that moment of collective ‘aha’ that set us on our path.”
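For the technically curious, here is a rough sketch of the kind of correlation Jon was describing: every device sends a small summary of what it has learned up to the cloud, and the cloud flags the devices that deviate from the rest of the fleet. Everything here is illustrative rather than Ekkono’s actual implementation – the parameter-vector summary, the distance measure and the flagging ratio are all assumptions.

```python
import numpy as np

def flag_unhealthy(device_summaries, ratio=5.0):
    """Flag devices whose learned parameters deviate from the fleet.

    device_summaries maps device_id -> 1-D parameter vector, e.g. the
    coefficients of a model learned on that device. A device is flagged
    when its distance to the fleet median is `ratio` times larger than
    the typical (median) distance.
    """
    ids = list(device_summaries)
    X = np.stack([device_summaries[i] for i in ids])
    center = np.median(X, axis=0)                # robust fleet centre
    dists = np.linalg.norm(X - center, axis=1)   # per-device deviation
    typical = np.median(dists) + 1e-12
    return [i for i, d in zip(ids, dists) if d > ratio * typical]

# Nine healthy devices plus one whose learned model has drifted badly
rng = np.random.default_rng(0)
fleet = {f"dev{k}": rng.normal(0.0, 0.1, size=4) for k in range(9)}
fleet["dev9"] = np.array([2.0, -1.5, 2.2, -1.8])   # the unhealthy one
print(flag_unhealthy(fleet))                       # -> ['dev9']
```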


How did that idea develop?

After we came up with the idea, it sort of took a backseat on our roadmap. At the time, our main focus was on getting our customers up to speed with what edge machine learning is all about, not to mention developing our edge machine learning library. But, as they say, good ideas have a way of coming back around.

Fast forward to spring 2019, we were working on a project with a car manufacturer, sitting in a room full of their AI experts. They were pretty impressed with what we had achieved, but then, as luck would have it, the grumpy guy in the corner – there’s always one, right? – he threw us a curveball: “This is great, but can you do federated learning?” I was taken aback, honestly, because I hadn’t even heard of federated learning at that point. The idea of speeding up learning by combining AI models trained on the edge in the cloud was new to me.

So, I did what any curious mind would do; I promised to look into it. Turns out, Google had just turned its full attention to Federated Learning and published what is today one of its most-cited papers on the subject. It was like a lightbulb went off – our JCE idea wasn’t just about comparing models in the cloud; we could actually make them learn from each other!
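The paper Rikard refers to introduced federated averaging, and the heart of that technique is surprisingly simple: each client trains locally, and a server averages the resulting model weights, weighting each client by how much data it trained on. The sketch below illustrates the general idea, not Ekkono’s code, and it assumes every client shares exactly the same model architecture – which, as Rikard explains next, is precisely the limitation that did not fit industrial IoT.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of federated averaging.

    client_weights: one list of per-layer np.ndarrays per client.
    client_sizes:   number of local training samples per client, used
                    to weight each client's contribution.
    Returns the new global weights, to be sent back to all clients.
    """
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two clients with identically shaped single-layer models
w_a = [np.array([1.0, 1.0])]   # trained on 100 samples
w_b = [np.array([3.0, 3.0])]   # trained on 300 samples
print(fed_avg([w_a, w_b], [100, 300]))   # -> [array([2.5, 2.5])]
```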

Of course, we quickly realized that to make Federated Learning work for us, especially in the diverse world of industrial IoT, we’d need to take a different approach than Google’s. The challenge was in handling different AI model types and settings, something Google’s vision didn’t quite accommodate.

Then, a stroke of inspiration hit me while revisiting my old PhD thesis. I had explored “rule extraction” extensively, a concept that could potentially solve our issue with heterogeneous models. I can’t dive too deep into the technicalities here, but with our growing team of brilliant minds, we started refining and expanding this concept. And that, in essence, is how we developed what’s now the core learning capability of Synthesis. It’s been quite the adventure, transforming a late-night idea into groundbreaking features that set us apart.
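Rikard understandably keeps the details close to his chest, but the textbook flavour of rule extraction can be illustrated with a surrogate model: fit a small, interpretable rule learner to the predictions of an opaque model. Because the extracted rules depend only on inputs and outputs, models of entirely different types can be reduced to one common representation. The scikit-learn sketch below shows that general concept; it is not Ekkono’s method.

```python
# pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# An opaque "edge" model of some arbitrary type...
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# ...distilled into readable rules by fitting a shallow decision tree
# to the opaque model's *predictions* rather than to the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))
```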

And how did you hear about the funding opportunity with the EIC?

It was Jon who stumbled upon this golden opportunity. He came up to me one day, all excited, and said, “Hey, there’s this EU grant out there for cutting-edge AI projects. It’s insanely competitive, but if we manage to snag this, we could really bring Synthesis to life.” What he conveniently left out at the time was just how much work drafting that application would be. We poured countless hours and a whole lot of sweat into the process. And yeah, we had to throw our hat in the ring not once, but three times, competing against some 8,000 other ambitious applications.

But, as they say, nothing valuable comes easy, right? Now that we’re on the path to making Synthesis a reality, looking back, all that effort, all those late nights, they feel absolutely worth it. It’s one of those moments where you realize the grind pays off, and it’s paying off in a way that’s making our big vision for Synthesis come to life.

The funding from the EIC hasn’t just supported the development of Synthesis; it has also given you the opportunity to attend CES 2024. Can you tell us more about how AI was perceived at this year’s edition of the world’s largest tech trade show?

Oh, CES 2024 was a blast, especially with AI stealing the show, and guess what? The big reveal was AI’s move to the edge – something we’ve been pioneering since 2016! This year, though, the spotlight was on generative AI, like the large language models behind ChatGPT. But what really caught my attention was the conversation around why AI is shifting from the cloud to the edge – and the keywords were Security and Privacy, two of the very pillars we’ve been building on for the past eight years.

Now, the tech world is buzzing about bringing these AI features to our laptops with the help of specialized processors dubbed NPUs (Neural Processing Units). It’s an interesting turn, considering it somewhat diverges from our approach. We’ve been focused on making AI super efficient without the need for specialized hardware, enabling it to run on just about anything – even legacy equipment.

And for a fun fact, Henrik Linusson, one of our brilliant data scientists, managed to get AI running on a Commodore 64. Talk about taking “legacy” to a whole new level! It just goes to show that with the right know-how, the possibilities of edge AI are boundless, no matter the hardware.
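How does AI run on hardware that old? One standard trick – illustrative here, and not necessarily the one Henrik used – is integer-only inference: quantize the model’s weights and inputs to small integers, so that a prediction needs nothing more than the multiply-and-add that any plain CPU (or 1982 home computer) can do. A minimal sketch, with all scales chosen arbitrarily:

```python
import numpy as np

def quantize(x, scale):
    """Map floats to int8 using a fixed scale (symmetric quantization)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# A tiny linear model y = w.x + b, trained in floating point...
w, b = np.array([0.8, -0.3, 0.5]), 0.1
s_w, s_x = 0.01, 0.05            # quantization scales (assumed/tuned)
wq = quantize(w, s_w)            # weights stored as int8

def predict_int(x):
    # ...but executed with integer multiply-accumulates only, which any
    # general-purpose processor can do -- no NPU required.
    xq = quantize(x, s_x).astype(np.int32)
    acc = int(np.dot(wq.astype(np.int32), xq))   # integer accumulator
    return acc * (s_w * s_x) + b                 # rescale once at the end

x = np.array([1.2, -0.4, 0.9])
print(predict_int(x), "vs float:", float(w @ x + b))   # 1.63 vs 1.63
```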

Where is Synthesis now in development?

It’s like we’re in our very own sci-fi movie – Synthesis is truly coming to life! Picture it a bit like Frankenstein’s monster: piece by piece, the core algorithms and services are coming together, and we’re beginning to see the shape of what’s to come. It’s an electrifying phase for us. We’ve got this incredible team, each member bringing something unique to the table, driving the project forward with unmatched zeal.

The excitement isn’t just within the team; our customers are keeping a close eye on our progress, too. Their input has been invaluable, guiding us to ensure Synthesis is not just a collection of cool tech but a solution that delivers tangible benefits. This interaction is proving crucial in steering the development towards real-world applications and customer needs.

While we’re about halfway through our journey, with plenty more ground to cover, the pieces of the puzzle are beginning to fit together seamlessly. The road ahead is still long, but with the progress we’ve made and the team’s relentless drive, it’s hard not to feel optimistic about what’s next for Synthesis.