Logo banner with the headline “Sciencepreneurtalk” in red lettering and the name “Ladyna Wittscher” highlighted in red underneath.

From Lab to Learning Systems

Published


Written by
Bianca Cramer

Artificial intelligence is often presented as either a promise of limitless efficiency or a looming existential risk. Somewhere between hype and fear, many people struggle to understand what today’s AI systems can do, where their limits lie, and how much trust they deserve. For Ladyna Wittscher, this gap between perception and reality is at the heart of her work. Her academic journey did not begin in computer science, but in chemistry and environmental engineering, a field defined by physical materials, laboratory routines, and controlled experiments. Over time, however, Ladyna began asking different questions. Could experiments be made more efficient through simulations? What happens when learning itself becomes the object of scientific inquiry? And how reliable are systems that learn from data rather than fixed rules?

Today, Ladyna is pursuing a PhD on robust and reliable AI systems, focusing on self-supervised learning and image recognition. Alongside her research, she develops educational programmes that make AI tangible, critically examine its limitations, and empower people to engage with the technology responsibly. Her work bridges research and society, grounded in the conviction that understanding must keep pace with technological progress.

You originally started in chemistry and environmental engineering before moving into AI. Can you walk us through how you “slipped into” this field and realised that the lab was not the place you wanted to stay?

During my master’s thesis in electrochemical energy storage, I spent a lot of time in the laboratory and realised that I wanted to move beyond repetitive experiments. I had already used simulations during my bachelor’s thesis and learned programming during my master’s programme, but I wanted to understand these methods more systematically. That curiosity eventually led me to pursue a second master’s degree in business informatics, where I became particularly enthusiastic about neural networks and self-supervised learning. What fascinated me was their potential to solve complex tasks across very different domains. Looking back, it was not a sudden switch, but rather a gradual shift from working primarily with physical systems in the lab towards exploring learning systems that operate on data.

What sparked your fascination with neural networks and self-supervised learning during your master’s studies?

Neural networks fascinated me because they enable machine learning without relying on fixed, preprogrammed rules. Instead of being told explicitly what to do, they learn patterns from data. I wanted to understand how that works in detail. It amazed me that mathematically simple artificial neurons can be connected into very large networks that form the basis of today’s AI systems. Even though these systems are built entirely on statistics, optimisation, and large amounts of data, they often appear intelligent. That tension raises fundamental questions about what intelligence means. Self-supervised learning extends this idea even further, because the model generates its own training signals directly from the data. This makes it possible to learn from domains where labelled data is rare or difficult to obtain, such as materials science, toxicology, or medicine. The breadth of applications this enables still fascinates me today.
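
As a minimal illustration of this idea, the sketch below shows a single artificial neuron (written in Python with NumPy) that is never given a rule, only example inputs and outputs, and adjusts its weights until it reproduces the pattern. The toy data and all settings are arbitrary and purely illustrative.

```python
import numpy as np

# Toy data: the neuron is never told a rule, only shown input/output examples.
# Here it should learn to fire only when both inputs are active (a logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # learnable weights
b = 0.0                  # learnable bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the weights to reduce the prediction error.
for _ in range(5000):
    pred = sigmoid(X @ w + b)
    error = pred - y                  # gradient of cross-entropy w.r.t. the pre-activation
    w -= 0.5 * X.T @ error / len(y)
    b -= 0.5 * error.mean()

print(np.round(sigmoid(X @ w + b), 2))  # predictions close to [0, 0, 0, 1]
```

Connecting many such neurons into layers, and stacking layers into deep networks, yields the large models described above; the underlying learning principle stays the same.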

At what point did you realise that you wanted to pursue a PhD in AI rather than stay in chemistry?

Initially, I did not plan to pursue a PhD in AI at all. My goal was simply to learn more about programming and combine it with my background in chemistry. Over time, however, I became increasingly interested in neural networks and self-supervised learning and eventually wrote my master’s thesis on the topic.

During that process, I realised how dynamic AI research is and how much remains unexplored. The speed of progress and the openness of the research questions motivated me to continue learning. At the same time, I genuinely enjoyed doing research, asking questions, and systematically exploring them. That combination ultimately led me to pursue a PhD.

Your PhD focuses on self-supervised learning and image recognition. How would you explain this area of research to someone without a technical background?

Neural networks need very large amounts of data to learn. Traditionally, this data is labelled manually by humans, which is time-consuming and expensive. Self-supervised learning takes a different approach by creating tasks in which the model generates its own labels. For text, this might involve predicting a missing word in a sentence. For images, the model might reconstruct a missing part of an image or predict the degree of rotation of an image.
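
The rotation example can be made concrete with a short sketch. The code below is a minimal illustration using NumPy; the stack of unlabelled images is a random, hypothetical stand-in. It shows how a self-supervised pretext task turns unlabelled images into a labelled training set without any human annotation.

```python
import numpy as np

def make_rotation_task(images):
    """Build a self-supervised training set: each image is rotated by a random
    multiple of 90 degrees, and the rotation index becomes the label.
    No human annotation is needed; the labels come from the data itself."""
    rng = np.random.default_rng(0)
    inputs, labels = [], []
    for img in images:
        k = rng.integers(0, 4)           # 0, 1, 2 or 3 quarter turns
        inputs.append(np.rot90(img, k))  # the transformed image
        labels.append(int(k))            # the self-generated label
    return np.stack(inputs), np.array(labels)

# Hypothetical stand-in for an unlabelled image collection: 8 tiny 32x32 images.
unlabelled = np.random.default_rng(1).random((8, 32, 32))
X, y = make_rotation_task(unlabelled)
print(X.shape, y)  # a classifier trained on (X, y) learns useful image features
```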

My own research examines whether self-supervised learning can improve robustness in image recognition, that is, how well models cope with noisy, incomplete, or unbalanced data. I also study hyperparameter sensitivity: how training conditions influence model behaviour and what models learn. Hyperparameters cannot be learned by the model itself and therefore must be specified by the developer, such as the number of layers in a neural network or the duration of training. Consequently, they represent a form of human influence on the training process. In my view, this is not sufficiently addressed in research.
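
A small, hypothetical illustration of this distinction: the sketch below uses scikit-learn on synthetic data (not the models or data from the research described here) to train the same kind of network under two different human-chosen hyperparameter settings. Only the connection weights are learned from the data; the number and size of layers and the duration of training are fixed in advance, and the resulting behaviour differs.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; the point is the role of the hyperparameters below.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameters are chosen by the developer before training starts;
# only the weights inside the network are learned from the data.
for layers, epochs in [((16,), 20), ((64, 64), 200)]:
    model = MLPClassifier(hidden_layer_sizes=layers,  # number and size of layers
                          max_iter=epochs,            # upper bound on training duration
                          random_state=0)
    model.fit(X_train, y_train)  # the short run may warn that it has not converged
    print(layers, epochs, round(model.score(X_test, y_test), 3))
```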

What have you learned about model behaviour, hyperparameter sensitivity, and bias that you believe more people should understand?

AI systems are fundamentally based on probability and human-generated data. Their behaviour is shaped by hyperparameters and other human decisions. As a result, they tend to produce the most statistically likely and socially reinforced answer. That answer is often correct, but it is not guaranteed. Since AI systems inherit biases from their training data, they cannot be considered objective or neutral.

High performance can mask problems related to fairness or robustness. AI captures statistical regularities in data, but it does not understand meaning in the human sense. It is sensitive to how questions are framed and often produces overly agreeable answers. This lack of transparency and explainability becomes particularly problematic when AI systems are involved in decision-making processes.

You are very focused on what is currently scientifically grounded in AI. Why is it important to avoid over-speculation when discussing the future of AI?

AI is already deeply embedded in our everyday lives, at work, at school, and in private contexts. This creates very concrete challenges, such as discrimination, hallucinations, data protection issues, deepfakes, copyright questions, deskilling and the concentration of power among large technology companies.

For me, these real and present issues must not be overshadowed by highly speculative debates about distant futures. A solid scientific understanding of how AI works helps draw attention to these problems and supports the development of realistic solutions. We are currently in the middle of a major transformation, and it is crucial to remember that we can still shape it.

In your workshops, you make AI experiential by showing examples like hallucinations, bias, or jailbreaking. Why is hands-on experience so important for understanding AI risks?

Hands-on experiences are essential for sustainable learning. People are far more likely to remember and internalise knowledge when they arrive at insights themselves rather than being told abstractly. In my workshops, I want participants to experience how AI behaves in practice. This combination of specialist knowledge and direct interaction helps people develop a realistic understanding of what models can and cannot do. The goal is not to create fear, but to empower participants, encourage critical thinking, and build genuine AI competence.

What are the most common misconceptions you encounter among people with little prior exposure to AI?

One very common misconception is that AI works like a search engine. In reality, AI systems function very differently and are often not suitable for classical search engine tasks such as reliably researching news. This misunderstanding can lead to unrealistic expectations and ineffective use of the technology, preventing people from fully exploiting its potential.

Bias and discrimination in models are recurring themes in your work. Is there one example that has particularly stayed with you?

Although AI models have improved in recent years, they will likely never be completely free of bias. One experiment I conducted during a workshop illustrated this very clearly. I asked participants to instruct an AI to write a short story about a person who selflessly cares for someone else. In the vast majority of cases, the AI assigned a female first name to the caregiver. This simple exercise showed how deeply social stereotypes are embedded in training data and how they can be reproduced or even reinforced by AI systems.

Your business idea is not tied to a single product but focuses on AI education. What motivated you to choose this path?

I believe AI education is particularly important at this moment. Even the most advanced AI tools are only useful if people integrate them meaningfully into their everyday work and decision-making processes. That requires at least a basic understanding of how the systems function, where their weaknesses lie, and what opportunities they offer. We need responsible users who actively shape this technological transformation rather than passively adapting to it. An informed public discourse is only possible if people understand both the potential and the risks. Therefore, education is essential, and I see a great deal of work still to be done in this area.

You work with very diverse target groups. What do you value most about this variety?

I really enjoy the diversity of perspectives. When people ask questions from very different backgrounds, it enriches the discussion and often challenges me to think about AI in new ways. It is important to me that both people who have never used AI and those who use it daily gain something from my workshops.

My aim is not only to convey scientific facts but to enable learning through personal experience, always tailored to each individual’s professional and personal context.

How do your scientific insights influence your workshops, even if your PhD results themselves are not part of the curriculum?

My research keeps me closely connected to the rapid developments in AI. This ensures that my workshops remain scientifically grounded and up to date. A strong focus of my research is understanding model behaviour and reliability, and I believe this knowledge is essential for assessing when AI can be trusted and when it should be approached with caution. In addition, I am very interested in studies on the social impact of AI, which allows me to integrate broader societal perspectives into my educational work.

What gap in AI knowledge do you currently see in society, and how do your workshops address it?

There is often a noticeable gap between scientific research and public understanding. As a scientist, I see it as my responsibility to help bridge that gap. It is important to me that my workshops are both science-based and accessible. I also observe that attitudes towards AI are often polarised. AI is neither a universal solution nor an inevitable catastrophe. It is a powerful tool that can exacerbate existing problems such as discrimination or misinformation. My workshops, therefore, combine theoretical knowledge with practical exercises, giving participants the space to develop nuanced, informed opinions.

You are intentionally taking a slow and low-risk approach to building your educational offering. What advantages does this bring?

This approach allows me to focus on quality and depth. I have the time to test different formats with diverse audiences, incorporate feedback, and continuously improve my workshops. While this path may not suit every idea or personality, it has helped me minimise obstacles and build something sustainable from the start.

How do you balance the demands of a PhD with running workshops and consulting?

It is not always easy and can involve a heavy workload. However, the two areas complement each other very well. Working with people in workshops gives me energy and inspiration, which in turn motivates me in my scientific work.

What are the biggest overlaps between your academic work and your educational work?

I have been teaching at the university since the beginning of my PhD, so I already had considerable experience in education. Working with different target groups has further sharpened my teaching skills, especially with regard to practical application. At the same time, I use different AI systems myself, which gives me hands-on experience that I can directly pass on in my workshops. Both areas strongly reinforce each other.

What made you join the Top-Talents-Track at this early stage of your venture?

I wanted to use the opportunity to learn, exchange ideas with others, and further develop my innovative thinking. Especially at an early stage, the programme offers valuable networking opportunities and support.

Has the programme influenced how you think about your idea or your personal development?

Yes, very much so. The programme encouraged me to reflect on the entrepreneurial mindset and reinforced my ideas. I gained both motivation and concrete knowledge, and those impulses continue to shape my work.

Which aspects of the programme have been most meaningful so far?

Meeting inspiring people and hearing founders speak openly about their challenges has been particularly valuable. I also appreciated the visit to the Falling Walls Science Summit, which brings science and business together in a unique way. In addition, the coaching sessions provided important moments for self-reflection.

You aim to empower people to navigate the societal shift caused by AI with confidence. What does that vision look like over the next five years?

I would like to develop a broader portfolio of workshops tailored to different professional fields and target groups. My goal is to cover all dimensions of AI literacy, including knowledge, practical skills and values.

Where do you see the most urgent need for responsible AI education today?

Everyone who works with AI tools should receive proper training. If companies fail to provide training and reliable AI tools, there is a high risk that unregulated shadow AI tools will be used. Teachers and parents also play a crucial role, as children and young people are constantly exposed to AI-generated content on social media.

How should society approach growing fears around AI, especially among non-technical audiences?

Fears should always be taken seriously, as they are very real for those experiencing them. Education can help people better understand what AI is today and what it is not. At the same time, we should focus on developing fair and transparent AI systems and addressing the concrete problems that already exist.

Curiosity and persistence drive your research. How do these qualities shape your everyday work?

Curiosity is essential in a field that evolves as quickly as AI. It keeps me engaged and motivated. Persistence is equally important because research is a long process with setbacks. Both qualities are values I also want to convey to others: staying open to learning while keeping pace with technological change.

What helps you stay grounded when research becomes overwhelming?

Reminding myself why I am doing my PhD helps a great deal. It is about learning and contributing, even in small ways, to an important field. Talking to other PhD students, as well as spending time with friends and family, provides balance.

How does hiking with your dog, Luna, contribute to creativity and balance in your life?

Hiking allows me to be present and active, and Luna is a wonderful companion. Being out in nature with her helps clear my head. I think we can learn a lot from dogs: taking breaks, staying curious, and appreciating small, beautiful moments.

Finally, what advice would you give to people who want to understand AI more deeply or even start teaching it themselves?

Stay curious and think critically. Even if it feels complex at first, it is worth investing the time to understand how AI systems work. Apply AI in a domain you know well to see where it performs well and where its limits lie. Rather than trying to understand everything at once, focus on the applications that matter to you.

What happens next?

Check out our event calendar for upcoming workshops. Stay tuned for more updates, opportunities, and success stories!
Connect with Ladyna via LinkedIn!

Interviewer & Editor, Design: Bianca Cramer

Sponsored by academics, Boehringer Ingelheim, ZEIT Verlagsgruppe, and the Federal Ministry of Research, Technology and Space.