Thesis Committee:
Prof. Riedel (ISW)
Problem to be investigated:
The Digital Twins of products and production face the challenge that, once the development process is closed, they no longer reflect the real status of production, where events such as product inaccuracies, equipment failures, poor quality, or missing component parts occur continuously. To achieve production resilience, a holistic methodology should be developed that combines a top-down with a bottom-up approach for capturing real-time product and production parameters through enabling technologies such as 3D scanning and intelligent sensors, and for embedding them in Digital Twins. This capturing and embedding process results in so-called "cognitive Digital Twins". Nevertheless, these Digital Twins often rely on historical data, which may itself contain inaccuracies. The PhD research project therefore also focuses on enabling resilience in Digital Twins by developing a machine learning-based approach with which the Digital Twin can self-learn whether it contains an inaccuracy and correct it by bringing the Digital Twin back to the accurate state. The motivation scenario for validation is an innovative set-up of an automated measurement cell, in which state-of-the-art robotics technologies, e.g. stationary, collaborative, and mobile components, are integrated with 3D laser scanning and intelligent sensors, e.g. for temperature, pressure, and velocity in three axes; this cell represents the core of the demonstration activities. In addition to the discrete-manufacturing scenario in the measurement cell, the realisation and validation of a multi-layer carbon fiber printing process represents the second demonstrator.
Relevance of the research topic:
In order to implement Cognitive Digital Twins in the operational manufacturing environment, the project follows a bottom-up procedure, addressing the theme in two critical manufacturing areas: 1) product quality assurance in discrete manufacturing, exemplified by modular production in the automotive industry, and 2) process quality assurance in continuous manufacturing, exemplified by monitoring and optimising the multi-layer carbon fiber printing process for the aerospace industry. Both applications face the challenge of bringing the Digital Models of physical manufacturing entities from the factory shop floor, e.g. parts, components, equipment, tools, devices, and human workers, to life in at least near real time. The generic approach and methodology, validated for these two specific quality assessment scenarios of product and process in the selected industries, will then be instantiated for other production domains and industries. The development of a generic approach for Real-Time Digital Twins in manufacturing is followed by the development of a Road Map for migrating this generic approach to other industries, e.g. the machine tool/equipment industry, and to other processes, e.g. logistics and machining.
Scientific objectives:
To conceive, develop, and validate Cognitive Digital Twins that support the realisation of resilient production and factories, the following scientific and technical objectives have been established:
Objective #1: Design and development of the Reference Models for Resilient Production. In product, process, and factory planning, reference models exist for the product, process, and production life cycle, but resilience aspects have so far been insufficiently taken into account in them. The aim of Objective #1 is to determine how the existing reference models can be enhanced to include resilience indicators, so that such enhanced reference models are available for the addressed use cases. This will make it possible to evaluate and optimise production holistically from the point of view of resilience features and characteristics. Additionally, specific KPIs for measuring the performance of the process optimisation and the achievement of resilience will be developed.
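The concrete resilience KPIs are left open at this stage. As an illustration of the kind of indicator that could enter the enhanced reference models, the following sketch computes a simple area-under-the-performance-curve resilience index over a disruption window; this formulation is a common one from the resilience engineering literature, not a method specified in the proposal:

```python
def resilience_index(performance, nominal=1.0):
    """Ratio of delivered to nominal performance over an observation
    window; 1.0 means no performance loss despite a disruption.
    Illustrative KPI only - the actual indicators are the subject of
    Objective #1."""
    if not performance:
        raise ValueError("need at least one performance sample")
    return sum(performance) / (nominal * len(performance))
```

For example, a process whose output drops to 50% for one of three equal intervals scores (1 + 0.5 + 1) / 3, i.e. about 0.83.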
Objective #2: Methodology for the implementation of the Reference Model in a Cognitive Digital Twin and a virtual engineering environment. The Reference Model forms the basis for the enhancement and implementation of a newly designed engineering environment based on state-of-the-art digital manufacturing technologies, e.g. from Siemens or Dassault Systèmes. This new engineering environment has to be open, expandable, service-based, and safety-oriented. The Digital Twins of all factory objects are extended by context captured in real time from the shop floor, supported by 3D scanning, wireless intelligent sensors, and digital manufacturing technologies. The achieved cognitive status of the Digital Twins enables the realisation of resilience as a balance between robustness and flexibility.
Objective #3: Design and development of a Cognitive Digital Twin-centred learning assistance system towards resilient production. The aim is to develop a learning, context-aware assistance system as the main enabler for achieving resilient production. The process flow of this new system starts with the creation of the Digital Twins of all factory objects, followed by capturing real-time data from the shop floor; adding cognition to the Digital Twin based on the methodology developed in Objective #2; analysing the current data against historical data using AI and deep learning algorithms; and elaborating and documenting actions for resilient processes and for supporting user decision-making.
Objective #4: Development of an approach with which the Digital Twin self-learns whether its asset/process representation deviates from the accurate state. This is achieved through the development of a probabilistic, risk-based approach that identifies where the deviation in accuracy originates and automatically traces its data and model sources. Additionally, a machine learning-based approach for the Digital Twin to self-adapt and increase the accuracy of its representation, together with a simulation toolkit with machine learning features to optimise the accuracy of the Digital Twin, should be developed.
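The deviation-detection and self-adaptation idea of Objective #4 can be sketched minimally as follows; the residual test, the noise and threshold parameters, and the bias-only twin correction are illustrative assumptions, not the probabilistic risk-based method to be developed:

```python
import numpy as np

def detect_deviation(measured, predicted, noise_sigma=0.05, threshold=3.0):
    """Flag samples where the residual between sensor data and the
    Digital Twin's prediction exceeds `threshold` standard deviations
    of the assumed sensor noise. Parameters are hypothetical."""
    residuals = np.abs(np.asarray(measured, float) - np.asarray(predicted, float))
    return residuals > threshold * noise_sigma

def self_adapt(twin_bias, measured, predicted, rate=0.5):
    """Stand-in for the ML-based self-adaptation: nudge a simple bias
    parameter of the twin toward the observed mean residual."""
    mean_residual = float(np.mean(np.asarray(measured, float) - np.asarray(predicted, float)))
    return twin_bias + rate * mean_residual
```

In this toy setting, repeated application of `self_adapt` drives the twin's prediction toward the sensor readings, which is the behaviour the learning approach is meant to deliver at full scale.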
Objective #5: Validation, incremental improvement, and roadmaps for migrating the generic approach and methodology to other manufacturing processes and industries. The achievement of production resilience, the resilience of the Digital Twins, and the process optimisation in the two developed use cases will be assessed based on the KPIs identified in Objective #1. A scientifically founded validation test-bed will be elaborated. Additionally, a concept for employing Cognitive Digital Twins in other manufacturing processes and industries will be developed as well.
Thesis Committee:
Prof. Hölzle (IAT)
Problem to be investigated:
In the field of production and product development, specific challenges arise in relation to collaboration and communication in remote collaboration. Virtual or augmented reality systems (AR/VR, often summarised under the term metaverse) have great potential here, supplemented by generative AI applications. Although specific VR software tools exist, these are often isolated solutions that only focus on the specific use case and do not offer comprehensive support for creative processes. This discrepancy between currently available technologies and the actual requirements in practice leads to sub-optimal utilisation of the potential of digital tools in the development and innovation process. There is a lack of integrated virtual tools that comprehensively and holistically support and promote creative and collaborative processes in distributed teams.
Relevance of the research topic:
The increasing spread of remote work is leading to a radical change in the world of work. In particular, the understanding and practice of collaboration has changed fundamentally due to the possibility of working from any location. Innovation and product development in particular are characterised by a high degree of interdisciplinary collaboration, as complex problems can only be solved through the interaction of different disciplines. In this context, creativity plays a central role in developing innovative solutions and thus securing competitive advantages. To support creative processes, the integration of digital technologies such as generative artificial intelligence (AI) or VR/AR technologies is rapidly gaining in importance.
In order to enable creative processes in this environment, there is a considerable need for research into a technical and procedural design that overcomes the limitations of today's widespread solutions.
Scientific objectives:
The central question of this research work is how the co-operation between physical and virtual actors in the product development process must be designed with a special focus on creativity and collaboration. To answer this question, the following aspects will be analysed:
1. What features must a tool have to efficiently support communication, collaboration, and creativity even in remote settings?
2. How must a user interface for remote settings be designed so that it offers both verbal and non-verbal interaction options while also supporting direct spatial collaboration?
3. How can generative artificial intelligence functions be made optimally accessible and usable within the tool in a user-centred manner?
4. What can a technical system architecture look like that implements these functionalities and can at the same time be seamlessly integrated into existing development processes?
5. Which criteria and metrics can be used to evaluate the tool, in particular with regard to visual quality (environment and avatars) and the quality of interaction (e.g. latency)?
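For the evaluation criteria raised in question 5, interaction quality is commonly summarised with latency statistics such as the following; the specific metrics (mean, 95th percentile, jitter) are conventional choices for VR/AR evaluation, not requirements taken from the proposal:

```python
import statistics

def latency_summary(samples_ms):
    """Mean, 95th percentile, and jitter (standard deviation) of
    interaction-latency samples in milliseconds - typical quantities
    for judging the interaction quality of a VR/AR tool."""
    s = sorted(samples_ms)
    idx = min(len(s) - 1, round(0.95 * (len(s) - 1)))
    return {
        "mean_ms": statistics.fmean(s),
        "p95_ms": s[idx],
        "jitter_ms": statistics.pstdev(s),
    }
```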
This research aims to develop an innovative solution that meets the requirements of modern working environments while fostering creative collaboration. The research will focus on how technical systems and design principles can be combined and implemented to create an effective and user-friendly tool for distributed engineering teams.
Thesis Committee:
Prof. Mehring (IMVT)
Problem to be investigated:
Relevance of the research topic:
Scientific objectives:
Thesis Committee:
Prof. Graf (IFSW), Prof. Rademacher (INT)
Problem to be investigated:
Hollow-core fibers have been attracting increasing attention over the past decade for a wide range of applications, from telecommunications, where fiber lengths of tens or even hundreds of kilometers are required, to the delivery of high-power pulsed laser radiation, both in fundamental-mode and multimode operation, where a few meters (typically 10–20 m) suffice for most laser-based applications. Additionally, they are gaining interest in quantum technologies such as quantum communication, quantum sensing, and precision metrology. Recently, inhibited-coupling hollow-core double-nested fibers (IC-HCFs) were reported to exhibit confinement losses as low as 0.08 ± 0.03 dB/km at a wavelength of 1550 nm, according to the authors the lowest attenuation ever achieved in an optical fiber [https://opg.optica.org/abstract.cfm?uri=OFC-2024-Th4A.8]. This makes IC-HCFs highly promising for a broad range of applications. However, their fabrication remains complex, requiring in-depth investigations across the entire development process. A multidisciplinary approach, encompassing thermodynamics, material science, optics, and laser physics, is essential to optimize their design, manufacturing, and qualification.
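To put the reported 0.08 dB/km in perspective, attenuation in dB converts to a transmitted-power fraction as P/P0 = 10^(-alpha*L/10); the following helper makes that arithmetic explicit:

```python
def transmitted_fraction(alpha_db_per_km, length_km):
    """Fraction of launched optical power remaining after `length_km`
    of fiber with attenuation `alpha_db_per_km` (dB/km)."""
    return 10 ** (-alpha_db_per_km * length_km / 10)
```

At 0.08 dB/km, roughly 16% of the power survives 100 km (8 dB total loss), whereas over the 10–20 m delivery lengths mentioned above the fiber attenuation is entirely negligible.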
Relevance of the research topic:
Photonics is a key enabling technology in many industrial branches, including communication, manufacturing, computing, health care, and many more. Photonic technologies are therefore also of relevance for the GSaME, be it in the field of data transmission, sensing, metrology, diagnostics, or even directly in the form of the laser beam as a manufacturing tool. In all these fields, fiber-optic beam delivery is an essential approach to enhance performance and flexibility for a given application. While the specific requirements that the fibers need to satisfy are as diverse as the potential applications, the scientific challenges essentially always boil down to tasks such as customizing the guided modes, reducing the losses and the sensitivity to bending, controlling dispersion, and - last but not least - advancing the manufacturing techniques to be able to reproducibly produce the desired fibers.
Scientific objectives:
Current IC-HCFs are primarily designed for efficient guidance of fundamental-mode radiation, which limits their applicability for multimode laser beam transmission. The work planned in the present project aims at developing comprehensive simulation approaches and design models that account for aspects such as field distribution, fiber losses, bending sensitivity, and dispersion effects, in order to design and optimize fibers for specific applications. Leveraging the fiber production facilities at the IFSW, established fabrication techniques like the stack-and-draw method will be further developed, and new manufacturing approaches will be explored to ensure a reliable and reproducible production process for bespoke fibers.
The fabricated fibers will undergo detailed experimental characterization and their performance will be tested both at the IFSW and the INT.
Thesis Committee:
Prof. Takors (IBVT)
Problem to be investigated:
Establishing a circular economy (circular bioeconomy) is a key building block on the path to a sustainable economic model that favours local resources and thereby minimises the human-induced impact on climate change. Bioprocess engineering approaches that are based on renewable raw materials or that convert, for example, CO2-containing off-gases into valuable products play a special role here. They can sustainably produce commodity and fine chemicals in large-volume production processes (>> 100 m³), but at the same time must compete economically with the established production processes based on fossil raw materials. The latter is particularly demanding, since today's fossil-based production processes are the result of up to a hundred years of process optimisation. New bioprocess engineering approaches must therefore be optimally designed and operated in order to compete successfully with the fossil status quo.
The bubble-column bioreactors favoured in this proposal offer great potential for establishing biotechnological processes at large scale. Compared with the stirred-tank reactors favoured so far, they have the intrinsic advantage of requiring lower expenditure for installation and operation, which makes the resulting production processes economically more competitive. At the same time, there is the disadvantage that the fundamental know-how for designing and operating such bubble-column bioreactors at the >> 100 m³ scale is still too fragmented. In industrial practice, only very few examples have been established so far, and these are the result of empirical tests. A stringent ab initio, in silico methodology for the design and the successful large-volume operation of such bubble columns is lacking.
Relevance of the research topic:
Most recently, in March 2024, the European Commission explicitly stated that biomanufacturing is a key building block for establishing resilient, sustainable production in Europe. In this context, it is essential to establish large-volume industrial processes for producing, for example, commodity and fine chemicals, but also foods and food additives. A review article co-authored and co-supervised by the applicant (Puiman et al., Current Opinion in Biotechnology, in review) explicitly highlights the importance of so-called gas fermentation with bubble-column bioreactors. It also identifies the still unsolved problems in the model-based design of this reactor type. These include, for example, the modelling of mass transfer, bubble-fluid interaction, and bubble population dynamics, as well as the very high computational power required, which at best permits the simulation of snapshots but not of complete process runs.
Scientific objectives:
To design and operate bubble-column bioreactors on a sound engineering basis, novel simulation approaches are needed that describe the large-volume (>> 100 m³) conditions with the greatest possible accuracy, not only as a snapshot but over the temporal course of a fermentation.
The current Euler-Euler and Euler-Lagrange approaches are based on computationally demanding mathematical methods (e.g. Reynolds-averaged Navier-Stokes (RANS) equations and lattice-Boltzmann (LB) methods), which are also accessible in commercial software such as ANSYS Fluent and MStar. The latter in particular (Euler-Lagrange approaches realised as LB in MStar) are frequently used in the applicant's group to simulate hydrodynamics, mass transfer, and microbial kinetics in bioreactors. This work shows that treating bubbles as Lagrangian 'particles', including the associated bubble population dynamics, is essential for accurately describing their interaction with the fluid and the resulting mass transfer. However, the approaches used so far are at best able to describe a single operating point as a quasi steady state, and only after compute-intensive simulations on GPUs lasting several days to weeks.
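In its simplest two-film form, the gas-liquid mass-transfer term that such coupled simulations resolve per bubble reads dC/dt = kLa (C* - C); a single explicit Euler step of this model can be sketched as follows (units and values purely illustrative):

```python
def dissolved_gas_step(c, c_sat, kla, dt):
    """One explicit Euler step of dC/dt = kLa * (C_sat - C), the
    two-film gas-liquid mass-transfer model (e.g. concentrations in
    mg/L, kla in 1/h, dt in h - illustrative units only)."""
    return c + dt * kla * (c_sat - c)
```

Starting from zero dissolved gas with a saturation concentration of 8 and kLa of 0.1 per unit time, one unit-time step raises the concentration to 0.8.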
This leads to the scientific question of the extent to which LB simulations can be accelerated through interaction with physics-informed neural networks (PINNs). This would, for the first time, open the door to simulating complete fermentation processes in large-volume bioreactors using Euler-Lagrange approaches.
Specifically, stationary solutions computed in a separate Python environment are to be coupled into MStar via the Python Pre API interface implemented in MStar. In a first step, these will be stationary bubble distributions including the associated velocity vectors. On this basis, the LB algorithm implemented in MStar is then to be enabled to solve the associated force balances in an accelerated manner, and thereby to reach pseudo-stationary solutions in the Euler-Lagrange field much faster.
The stationary bubble distribution, including the associated velocity vectors, is to be computed separately in Python by a PINN. The PINN is the result of training based on input data from known simulations and their solutions. For this purpose, (i) existing in-house simulation results will be used, (ii) new simulations will be carried out specifically for different operating conditions, and (iii) literature data will be incorporated. These data, separated into training and test sets, will be used to identify the PINN. The goal of the PINN is to map initial bubble distributions in a bubble-column bioreactor onto the stationary bubble distributions for characteristic operating modes. This bypasses the computationally expensive converging simulation in LB mode.
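The training objective of such a PINN combines a data term against reference LB solutions with a physics penalty. A minimal sketch, assuming for illustration conservation of total gas holdup as the physics constraint in place of the full momentum and population-balance residuals:

```python
import numpy as np

def pinn_loss(predicted_holdup, target_holdup, sparged_holdup, lam=1.0):
    """Composite physics-informed loss: mean-squared error against a
    reference lattice-Boltzmann field plus a penalty on violating
    gas-phase conservation (total predicted holdup must match the
    sparged inflow). Constraint and weighting are illustrative."""
    data_term = float(np.mean((predicted_holdup - target_holdup) ** 2))
    physics_residual = float(np.sum(predicted_holdup)) - sparged_holdup
    return data_term + lam * physics_residual ** 2
```

A prediction that matches the reference field but violates the conservation constraint is still penalised, which is the defining property that distinguishes physics-informed from purely data-driven training.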
Building on this, a next step will investigate the extent to which the PINN can also be used to predict pseudo-stationary flow fields of the fluid. Since the Python interface also allows the exchange of volume elements and velocity vectors, complete flow fields can in principle be predicted by the PINN.
If the coupling of the PINNs succeeds, the time needed to simulate operating conditions in bubble-column bioreactors will be reduced drastically. Sequences of successive states can then be studied in detail for the first time. This opens up, for example, the possibility of quantitatively describing the influence of internals, or of externally applied operational measures, on the performance of the bioreactor - and thus of the entire bioprocess. This is not possible with the methods available so far. The proposed approach therefore opens the door to a novel way of designing and evaluating bubble-column bioreactors in real large-volume operation.