Thesis Committee:
Prof. Bauernhansl (IFF)
Problem to be investigated:
The design and modeling of exoskeletons has so far followed a rigid methodology in the development process. However, an exoskeleton is designed for its specific use based on the user's characteristics. The IFF uses game engines as powerful platforms for the modeling, simulation and visualization of modular mobile production systems. The models developed and integrated in a game engine make it possible to predict and configure the factory in a dynamically adaptable manner. Human modeling, on the other hand, has previously relied on specialized software such as the AnyBody Modeling System™, which offers only limited direct compatibility with the simulation of other production modules. It is therefore necessary to examine the requirements and methods for developing, personalizing and integrating a human model for simulation and visualization in the context of a modular and mobile factory. In addition, the required accuracy of such a model or representation needs to be discussed and determined by considering the implications for human and system safety as well as system functionality. An exoskeleton can be worn only by users with similar bodily properties or dimensions if its ergonomic, safe and functionally adequate use is to be ensured. As a result, the continuous design of an exoskeleton based on virtual modeling and simulation has so far entailed long development times and led to inflexible results.
Relevance of the research topic:
Game engines continuously improve their physics simulations, making them potential development environments for simulations in various research areas, in this case for the individual modeling of humans, including the behavior of the musculoskeletal system and the interaction with other production modules. The complexity of human modeling is particularly challenging in the factory, as every human being has very different, individual characteristics. Using a dynamic musculoskeletal model approach in an environment compatible with other factory systems, realistic movements and forces can be determined from three-dimensional human and machine geometries. In this way, it is possible to calculate, predict and check whether the system fulfills the functional, safety-related and ergonomic requirements for employees. The aim is to reduce development times for exoskeleton prototypes and to increase flexibility in adapting and scaling an exoskeleton and the model. In the past, inverse kinematic or forward kinetic algorithms have been used for motion analysis and control. This project requires analyzing and validating the Forward And Backward Reaching Inverse Kinematics (FABRIK) algorithm developed at the University of Cambridge and implementing an adapted version as a solution for human and skeleton kinematics.
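To make the FABRIK approach concrete, the following is a minimal 2D sketch of the algorithm's two alternating passes; the published algorithm generalizes to 3D chains and supports joint constraints, both of which are omitted here for brevity:

```python
import math

def fabrik(joints, target, tolerance=1e-3, max_iter=50):
    """Forward And Backward Reaching Inverse Kinematics for a 2D chain.

    joints: list of (x, y) positions, base first; segment lengths are fixed.
    Returns updated joint positions whose end effector approaches the target.
    """
    lengths = [math.dist(joints[i], joints[i + 1]) for i in range(len(joints) - 1)]
    base = joints[0]
    # Target unreachable: stretch the chain straight toward it.
    if math.dist(base, target) > sum(lengths):
        for i in range(len(joints) - 1):
            r = math.dist(joints[i], target)
            t = lengths[i] / r
            joints[i + 1] = tuple((1 - t) * joints[i][k] + t * target[k] for k in range(2))
        return joints
    for _ in range(max_iter):
        if math.dist(joints[-1], target) < tolerance:
            break
        # Backward pass: pin the end effector to the target, work toward the base.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            r = math.dist(joints[i + 1], joints[i])
            t = lengths[i] / r
            joints[i] = tuple((1 - t) * joints[i + 1][k] + t * joints[i][k] for k in range(2))
        # Forward pass: re-anchor the base, work toward the end effector.
        joints[0] = base
        for i in range(len(joints) - 1):
            r = math.dist(joints[i], joints[i + 1])
            t = lengths[i] / r
            joints[i + 1] = tuple((1 - t) * joints[i][k] + t * joints[i + 1][k] for k in range(2))
    return joints
```

Each pass repositions every joint on the line toward its neighbor at the fixed segment length, which is what makes FABRIK iterative yet free of matrix inversions, an attractive property for real-time skeleton solving in game engines.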
Scientific objectives:
▪ How can a personalizable human model and its dynamics be modeled, simulated and visualized in the context of the modular, mobilisable factory?
▪ How must modular and scalable exoskeletons be designed and modeled?
▪ What are the requirements for game-engine-based modeling of a human musculoskeletal system in order to model, visualize and predict its interactions with the environment and with other systems?
Thesis Committee:
Prof. Bauernhansl (IFF), Prof. Huber (IFF)
Problem to be investigated:
Metamorphic value creation systems consist of a large number of hardware and software modules that are assembled into machines based on orders and can disintegrate again after the task has been completed. The aim of these production systems is to increase the degrees of freedom in the system in order to increase flexibility and efficiency. In addition, so-called vertical added value is made possible. Vertical added value describes the increase in value of the production system through the use of fungible learning outcomes based on the acquisition of production-related information and data that can be obtained in the context of conventional (horizontal) production processes. Due to the frequent reassembly of the modules, quick coupling and decoupling of the modules is desirable for reasons of added value. A wireless ICT interface enables conversion times to be reduced. Given the large number of modules, the variety and volume of data and information to be transmitted to enable the learning processes necessary for vertical value creation, and the need to safeguard error-free and secure data transmission, interfaces are needed that meet production requirements in terms of security and freedom from errors, have no negative influence on other modules, connect to a large number of modules and at the same time transfer a sufficient amount of data.
Relevance of the research topic:
There are currently various approaches to modular interfaces, such as Plug'n Produce. Wireless connections are also being explored with 5G and NFC. However, the existing approaches do not sufficiently fulfill the requirements of machine learning for transmitting control information. With regard to metamorphic production systems, this question represents a crucial foundation.
Scientific objectives:
▪ What are the requirements for ad-hoc, wireless interfaces for metamorphic production?
▪ Which data and information are to be transferred in the context of vertical value creation?
▪ What are the general conditions with regard to safety and security?
▪ Which methodology and technologies are suitable for efficiently transmitting data and information of the modular factory wirelessly?
Thesis Committee:
Prof. Bauernhansl (IFF)
Problem to be investigated:
Exoskeletons in therapeutic use can improve human motor function by assisting movement through complementary joint moments, thereby enabling a greater range of motion. Studies show that great therapeutic effects can be achieved with assistive exoskeletons in the rehabilitation of stroke patients with motor disabilities. For effective therapy, the exoskeleton would need to provide adaptive force support and optimally adjust to muscle force changes in real time so that the flaccid muscles are efficiently trained, with or without stimulation, by arm movements. If too much support is provided, the training effect diminishes. This optimization problem of controlling the support output of the exoskeleton must be solved in conjunction with the muscular force output of the dysfunctional joint. A personalized, neuromuscularly adaptive real-time adjustment of the exoskeleton to the joint function of the paralyzed arm is currently not possible because there is no physically based, patient-specific muscle force determination from biomechanical simulations that can realistically determine the dynamic joint resistance in real time from all muscle forces involved in the joint in three-dimensional resolution.
Relevance of the research topic:
The MBS software currently used for human modeling, and the exoskeleton interaction simulated with it, are based on highly idealized 1D models and cannot represent the actual 3D nature of the musculoskeletal problem. Due to the strong reduction of model complexity from 3D to 1D, the muscle forces calculated from 1D MBS models deviate strongly from reality and are therefore not applicable for the personalized configuration of exoskeletons. The 1D muscle models are not able to realistically represent the complex muscle tissue with its anisotropic properties and complex 3D geometries, because the muscle is a nonlinear continuum-mechanics problem. The path of the muscle between the fixation regions on the bone segments and its shape are also essential for the mobilization of muscle force and joint motion, but these are approximated only very rudimentarily, one-dimensionally and linearly, by the MBS method. However, joint movements and muscle forces can be realistically determined by 3D-FE forward-dynamic musculoskeletal modeling approaches with realistic muscle geometries. Thus, it would be possible to model the dysfunctional arm of the stroke patient at any time and determine the required support in real time. The exoskeleton can then be controlled in a dynamically adaptive manner. This personalized exo-human coupling would not be possible with 1D MBS methods.
Scientific objectives:
FEM-based simulation of stroke-induced neuromuscular disease of the arm involves both a methodological and a software engineering challenge. To date, there is no published 3D-FE simulation of complex muscle-joint systems with a forward dynamics approach in which motion results from regulated muscle activation. However, such 3D simulations of the arm would be essential to analyze the interaction with the exoskeleton and regulate its function accordingly. The realistic solution of the biomechanical simulation problem of the elbow joint with a 3D multi-muscle system requires knowledge of the physiology of the arm on the one hand and of 3D nonlinear continuum mechanics on the other, as well as of efficient optimization methods for determining muscle activations and of control techniques for configuring the exoskeleton for arm support. First of all, the muscular dysfunction of the arm has to be determined in a patient-specific manner, which is methodologically divided into two steps. The first step is to determine the pre-stretched muscle system at the joint (passive stiffness of the elbow joint). The pre-stretching of the muscles is relevant in two respects: first, it determines the static joint stiffness and, second, the maximum mobilizable active muscle force. Since muscle pre-stretching does not change after a stroke, the passive stiffness properties of the muscles are not affected by the disease and can be determined using the physiologically possible range of motion (RoM) as in a healthy person. The optimization of the pre-stretches will be done by a META-model-based method. In the second step, the muscle activities for the muscular dysfunction of the paralyzed arm will be optimized by a multi-dimensional META-model approach. Changes in muscle volumes due to atrophy also affect the passive stiffness of the muscles and thus force development.
Ultrasound measurements should be able to detect these muscle volume changes and allow the muscle pre-stretches to be recalculated using the META-models. The third and final step is to establish the bio-structural-mechanical coupling of the patient-specifically adapted simulation model of the arm with the exoskeleton in order to simulate the interaction. For the subsequent sensitivity study, depending on the number of design parameters, a large number of simulations needs to be efficiently planned and performed on a computational cluster, e.g. at the HLRS. The goal is to determine the prevailing muscle forces for the current state from recordings of EMG data from the muscles and the position of the moving arm, which are used as input parameters to the META-models, in order to calculate and readjust the required therapeutic support of the exoskeleton in real time.
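The META-model (surrogate) idea used in the optimization steps above can be illustrated with a deliberately simple 1D sketch: sample the expensive simulation at a few design points, fit a cheap analytic surrogate, and optimize the surrogate instead. The stand-in objective and the "pre-stretch" parameter below are hypothetical placeholders, not the project's actual FE model:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ≈ a*x² + b*x + c via the 3x3 normal equations."""
    Sx = [sum(x**k for x in xs) for k in range(5)]                    # Σx⁰..Σx⁴
    Sy = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]    # Σy, Σxy, Σx²y
    A = [[Sx[0], Sx[1], Sx[2]],
         [Sx[1], Sx[2], Sx[3]],
         [Sx[2], Sx[3], Sx[4]]]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(A)
    coeffs = []
    for col in range(3):  # Cramer's rule for [c, b, a]
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = Sy[r]
        coeffs.append(det3(M) / D)
    c, b, a = coeffs
    return a, b, c

def surrogate_optimum(simulate, candidates):
    """Run the expensive simulation at a few sample points, fit a quadratic
    surrogate (meta-model), and return its analytic minimiser."""
    ys = [simulate(x) for x in candidates]
    a, b, _ = fit_quadratic(candidates, ys)
    return -b / (2 * a)  # vertex of the fitted parabola

# Hypothetical stand-in for an expensive FE evaluation of a stiffness-error
# objective as a function of a muscle pre-stretch parameter.
simulate = lambda s: (s - 1.2)**2 + 0.05
best = surrogate_optimum(simulate, [1.0, 1.1, 1.3, 1.4])
```

In the real project, the surrogate would be multi-dimensional and fitted to batches of cluster simulations; the point of the sketch is only that the surrogate, once fitted, can be evaluated and optimized in real time even when the underlying simulation cannot.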
Thesis Committee:
Prof. Schuster (BWI)
Problem to be investigated:
Smart contracts in blockchain networks expand the possibilities for contracting. Via the consensus mechanism of the blockchain, conditions that are otherwise not verifiable for market participants become transparent. For example, the successful sale of a final product can also be traced by intermediaries and suppliers. This makes it possible to reference this transaction in contracts and, for example, link a payment to it. On the one hand, the broader information base makes it easier for new players to enter the market by reducing the barriers to entry. On the other hand, there is also the risk that it encourages collusion, because secret agreements between competitors can be more easily monitored and enforced by the parties involved. For an analysis of this trade-off, see Cong and He (2019). For an introductory overview of smart contracts, see also Schuster, Theissen, and Uhrig-Homburg (2020).
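The conditional-payment mechanism described above can be sketched as a toy model: a supplier's payment is released only once the downstream sale is visible on a shared ledger. This is an illustrative simulation only, not real blockchain code, and all class and transaction names are hypothetical:

```python
class Ledger:
    """Toy append-only ledger visible to all participants (illustrative only)."""
    def __init__(self):
        self.transactions = set()
    def record(self, tx_id):
        self.transactions.add(tx_id)
    def confirmed(self, tx_id):
        return tx_id in self.transactions

class SupplierContract:
    """Smart-contract sketch: the supplier is paid only once the sale of the
    final product is confirmed on the ledger, mirroring the payment-linked-
    to-transaction idea from the text."""
    def __init__(self, ledger, sale_tx, amount):
        self.ledger, self.sale_tx, self.amount = ledger, sale_tx, amount
        self.paid = False
    def settle(self):
        if not self.paid and self.ledger.confirmed(self.sale_tx):
            self.paid = True
            return self.amount  # payment released to the supplier
        return 0                # condition not met, or already paid
```

Because every participant can evaluate `confirmed()` against the same ledger, the payment condition is verifiable without trusting the counterparty, which is precisely what lowers entry barriers and, symmetrically, makes collusive side-agreements easier to police among colluders.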
Relevance of the research topic:
Smart contracts have not yet been used on a broad basis in supplier networks. One reason for this is that major players (e.g., large automobile manufacturers) do not want to give up their information monopoly because the consequences of doing so are not fully understood. The proposed research topic aims to find out what competition in such networks could look like and under which conditions smart contracts would result in advantages for particular market participants.
Scientific objectives:
In a first step, based on the economic model of Cong and He (2019), different use cases for the application of decentralized blockchain-based smart contracts in supplier networks will be developed. Here, one focus is on the regulatory framework of different industries (e.g., automotive industry). In a second step, proposals will be developed on how such blockchain networks can be designed to promote competition while reducing the risk of collusion and preserving confidentiality in critical areas. In this context, cooperation with industrial companies may be an option in order to better understand their decision-making and preferences.
References:
Cong, L. W. and Z. He (2019): Blockchain Disruption and Smart Contracts. Review of Financial Studies, 32, pp. 1754-1797.
Schuster, P., E. Theissen, and M. Uhrig-Homburg (2020): Finanzwirtschaftliche Anwendungen der Blockchain-Technologie. Schmalenbachs Zeitschrift für betriebswirtschaftliche Forschung, 72, pp. 125-147.
Thesis Committee:
Prof. Mitschang (IPVS), MANN+HUMMEL GmbH
Problem to be investigated:
The advancing digitization of product development processes and the increasing connectivity of systems enable novel combinations and analyses of data. In cooperation with an industry partner, the goal of this project is to design, implement, and validate an approach for a data-driven and automated product development process. In this context, a data-driven recommendation system is to be created that suggests parts of a suitable CAD model of the product design for a new customer's specification. Possible data sources for the recommendation system include historical data on customers' specifications from a requirements management tool, CAD data of completed product development projects, and product classification data from a PLM system. This project takes an interdisciplinary approach to address not only central issues related to data management, business intelligence, and Big Data technologies, but also engineering aspects of product development.
Relevance of the research topic:
Exploiting data and experience from existing and completed projects is a strategic competitive advantage in light of fast-moving product lifecycles. Faster and more target-oriented data analyses are necessary to meet customer requirements, such as shorter response times and working in an agile manner. At the same time, volatile market conditions require corresponding timeliness of the data used for analyses. Due to the increasing pressure for efficiency, it is necessary to make product development processes more resource-efficient and more oriented towards production costs. This may be achieved by considering data-driven, intelligent, i.e., self-controlling and self-optimizing, approaches early in CAD modeling. The development of such data-driven methods is a complex and current research topic situated at the intersection of challenges in information technology, business management, and engineering.
Scientific objectives:
Central tasks and questions related to this project are:
• Analysis of the product development process at the industry partner as well as differentiation of the research topic from existing approaches in literature.
• Investigation of existing data sources and data characteristics in the processes and systems of the industry partner as well as in related literature.
• Development and definition of generic data analysis scenarios for a recommendation system for CAD models based on data management and Big Data technologies.
• Identification of generic and methodological requirements from the scenarios, in particular with regard to required data sources, data quality, analysis methods, and implementation technologies.
• Feasibility study of the scenarios based on the requirements and on concrete products and product designs of the industry partner.
• Design of a generic approach for a data-driven recommendation system for CAD models and prototypical implementation of this approach for selected scenarios of the industry partner.
• Validation of the implemented approach in the context of the selected scenarios, e.g., with regard to the correctness and completeness of the CAD models recommended by the data-driven system (e.g., a completeness level of 80%).
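As a minimal illustration of the recommendation idea behind these tasks, one content-based sketch is to rank completed projects by the similarity of their requirement feature vectors and suggest the CAD models of the closest matches. All names, feature encodings, and data below are hypothetical; a real system would derive features from the requirements management and PLM data named above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend_cad_models(new_spec, history, k=2):
    """Return the CAD models of the k historical projects whose requirement
    features are most similar to the new customer's specification."""
    ranked = sorted(history, key=lambda p: cosine(new_spec, p["features"]), reverse=True)
    return [p["cad_model"] for p in ranked[:k]]

# Hypothetical history of completed projects with pre-computed feature vectors.
history = [
    {"cad_model": "filter_housing_A", "features": [1.0, 0.2, 0.0]},
    {"cad_model": "filter_housing_B", "features": [0.1, 1.0, 0.9]},
    {"cad_model": "filter_housing_C", "features": [0.9, 0.3, 0.1]},
]
suggestions = recommend_cad_models([1.0, 0.25, 0.05], history)
```

The open research questions listed above (data quality, feature extraction from CAD and requirements data, validation against an 80% completeness target) are exactly the parts this sketch glosses over.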
Thesis Committee:
Prof. Riedel (ISW)
Problem to be investigated:
The Digital Twins of product and production face the challenge that, once the development process is closed, they no longer reflect the real status of production, where events such as product inaccuracies, equipment failures, poor quality or missing component parts occur continuously. To achieve production resilience, a holistic methodology should be developed that combines a top-down with a bottom-up approach for capturing real-time product and production parameters through enabling technologies such as 3D scanning and intelligent sensors, and then embedding them in Digital Twins. This capturing and embedding process will result in the development of so-called "cognitive Digital Twins". Nevertheless, these Digital Twins often rely on historical data, which can contain inaccuracies. The PhD research project therefore also focuses on enabling resilience in Digital Twins by developing a machine-learning-based approach that allows the Digital Twin to self-learn whether it contains an inaccuracy and to correct it by bringing the Digital Twin back to the accurate state. The motivation scenario for validation is an innovative set-up of an automated measurement cell in which state-of-the-art robotics technologies (e.g. stationary, collaborative and mobile components) are integrated with 3D laser scanning and intelligent sensors (e.g. temperature, pressure, velocity in three axes); this cell represents the core of the demonstration activities. In addition to the discrete-manufacturing scenario in the measurement cell, the realisation and validation of a multi-layer carbon fiber printing process represents the second demonstrator.
Relevance of the research topic:
In order to implement the Cognitive Digital Twins in the operational manufacturing environment, the project takes a bottom-up procedure, addressing the theme in two critical manufacturing areas: 1) product quality assurance in discrete manufacturing, exemplified by modular production in the automotive industry, and 2) process quality assurance in continuous manufacturing, exemplified by monitoring and optimising the multi-layer carbon fiber printing process for the aerospace industry. Both applications face the challenge of bringing the Digital Models of physical manufacturing entities from the factory shop floor (e.g. parts, components, equipment, tools, devices, human workers) to life in at least near real time. The generic approach and methodology, validated for two specific quality assessment scenarios of product and process in the selected industries, will then be instantiated for other production domains and industries. The development of a generic approach for Real-Time Digital Twins in manufacturing is followed by the development of a road map for migrating this generic approach to other industries, e.g. the machine tool/equipment industry, and processes, e.g. logistics, machining, etc.
Scientific objectives:
To conceive, develop and validate the Cognitive Digital Twins aiming at supporting the realisation of resilient production/factory, the following scientific and technical objectives have been established:
Objective #1: Design and development of the Reference Models for resilient production. In product, process and factory planning, reference models exist for the product, process and production life cycle in which resilience aspects have so far been insufficiently taken into account. The aim of Objective #1 is to find out how the existing reference models can be enhanced to include resilience indicators, so that enhanced reference models are available for the addressed use cases. This will make it possible to evaluate and optimise production holistically from the point of view of resilience features/characteristics. Additionally, specific KPIs for measuring the performance of the process optimisation and resilience achievement will be developed.
Objective #2: Methodology for the implementation of the Reference Model in a Cognitive Digital Twin and a virtual engineering environment. The Reference Model forms the basis for the enhancement and implementation of a newly designed engineering environment based on state-of-the-art digital manufacturing technologies, e.g. from Siemens or Dassault Systèmes. This new engineering environment has to be open, expandable, service-based and safety-oriented. The Digital Twins of all factory objects are extended by the context captured in real time from the shop floor, supported by 3D scanning, wireless intelligent sensors and digital manufacturing technologies. The achieved cognitive status of the Digital Twins enables the realisation of resilience as a balance between robustness and flexibility.
Objective #3: Design and development of a Cognitive Digital Twin-centered learning assistance system towards resilient production. The aim is to develop a learning and context-aware assistance system as the main enabler for achieving resilient production. The process flow of this new system starts with the creation of the Digital Twins of all factory objects; capturing real-time data from the shop floor; adding cognition to the Digital Twin based on the methodology developed in Objective #2; analysing current data against historical data using AI and deep learning algorithms; and elaborating and documenting actions for resilient processes and for supporting user decision-making.
Objective #4: Development of an approach to self-learn whether there is a deviation from the accurate asset/process representation in the Digital Twin. This is achieved through the development of a probabilistic, risk-based approach to identify where the deviation in accuracy originates and to automatically understand its data and model sources. Additionally, a machine-learning-based approach for the Digital Twin to self-adapt and increase the accuracy of its representation, as well as a simulation toolkit with machine learning features to optimise the accuracy of the Digital Twin, should be developed.
Objective #5: Validation, incremental improvement and road maps for migrating the generic approach and methodology to other manufacturing processes and industries. The achievement of production resilience, the resilience of the Digital Twins and the process optimisation in the two developed use cases will be assessed based on the KPIs identified in Objective #1. A scientifically founded validation test-bed will be elaborated. Additionally, the employment of the concept of Cognitive Digital Twins in other manufacturing processes and industries will be developed as well.
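One possible starting point for the self-learning deviation detection of Objective #4, sketched here under our own assumptions rather than as the project's actual method, is a simple statistical residual test: compare the twin's prediction residuals against the residual distribution observed while the twin was known to be accurate, and flag outliers:

```python
import statistics

def deviation_detected(reference_residuals, new_residual, threshold=3.0):
    """Flag a Digital Twin inaccuracy when the new prediction residual
    (measured value minus simulated value) is a statistical outlier
    relative to residuals recorded while the twin was known accurate.

    A z-score test is a deliberately simple stand-in for the probabilistic,
    risk-based approach envisaged in Objective #4."""
    mu = statistics.fmean(reference_residuals)
    sigma = statistics.stdev(reference_residuals)
    return abs(new_residual - mu) > threshold * sigma
```

A production version would operate per parameter and per asset, and would feed flagged deviations into the self-adaptation step, but the decision structure (learned reference distribution, online test) stays the same.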
Thesis Committee:
Prof. Riedel (ISW)
Problem to be investigated:
For 3D capturing and the reconstruction of existing buildings, various techniques such as laser scanning, the photogrammetric structure-from-motion method or the structured-light method already exist. However, all of these techniques require prior planning steps before any acquisition, and the capturing process itself always has to be performed manually by human operators. Even after capture, many manual steps in various software applications are still required to get from a 3D scan to a BIM model. For this reason, scan-to-BIM approaches are still associated with comparatively high costs at this point in time. Laser scanning, the main technology currently used for scan-to-BIM approaches, adds significantly to the already high total costs through much higher acquisition costs compared to other capture methods. Further disadvantages arise when using laser scanning to capture small-scale objects, as these have to be captured from multiple scan positions, which requires much more time and thus causes higher costs compared to other capture methods.
Relevance of the research topic:
Due to constant technical progress in hardware and software, new areas of application are continually emerging for 3D scanning. Increasingly affordable technologies mean that more and more digitization is taking place. In the industrial sector, for example, existing factory facilities can be captured in three dimensions with laser scanning, true to detail. Technical progress in digital cameras also means that ever better sensors and technology can be found in these devices, which is why 3D scan acquisition with the structure-from-motion method also yields very good results. The use cases for the captured 3D data can ultimately be very diverse. The high demand for digital models of objects ranging up to entire existing buildings creates the need for automated acquisition methods in order to save time and costs.
Scientific objectives:
The aim of the research project is to develop an autonomous vehicle that carries out 3D scans as automatically as possible, which are then converted into BIM models. A digital model that is always kept up to date is an essential component for the sensible use of an existing building during its operational phase. The system to be developed will rely on camera-based 3D scanning, as this offers advantages in terms of low acquisition costs, good availability and a partially more time-efficient acquisition process for updating existing-building models compared to laser scans. Prior to the actual scan process, a room plan is to be created by first driving through the room. Based on the room plan, 360° panoramas will be captured, which, with the help of machine-learning image recognition algorithms, will help to determine navigation routes, camera parameters and the number of images for specific room areas. With the help of the vehicle to be developed and the program automations, it will be investigated to what extent the scan-to-BIM process can be automated. The process can be roughly divided into the following steps:
capturing, point cloud generation, classification and derivation into a BIM model.
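These four steps can be sketched as a chain of pipeline stages. All function names below are hypothetical stubs standing in for real structure-from-motion, point-cloud and BIM tooling; the sketch only fixes the interfaces between the stages:

```python
# Hypothetical stage functions sketching the scan-to-BIM pipeline.
def capture_images(room_plan):
    """Drive the planned route and collect 360° panoramas (stubbed)."""
    return [f"panorama_{i}" for i, _ in enumerate(room_plan)]

def generate_point_cloud(images):
    """Structure-from-motion reconstruction (stubbed as labelled points)."""
    return [{"source": img, "xyz": (0.0, 0.0, 0.0)} for img in images]

def classify_points(cloud):
    """Assign semantic classes such as wall/floor/door (stubbed to 'wall')."""
    return [dict(p, cls="wall") for p in cloud]

def derive_bim_model(classified):
    """Aggregate classified geometry into BIM elements (stubbed)."""
    return {"elements": classified}

def scan_to_bim(room_plan):
    """End-to-end pipeline: capture → point cloud → classification → BIM."""
    return derive_bim_model(classify_points(generate_point_cloud(capture_images(room_plan))))
```

Structuring the pipeline this way lets the project automate each stage independently and measure, per stage, how far the scan-to-BIM process can be automated.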
Thesis Committee:
Prof. Riedel (ISW)
Problem to be investigated:
In an operational manufacturing environment, a high volume of heterogeneous data is continuously collected from production and assembly processes, generated at all levels of manufacturing, from the process, workstations, lines, area and site of production up to the production network. To achieve flexibility, modularity and adaptability, the implementation of production Digital Twins is required. In the current era of Industry 4.0, and on the way to Industry 5.0, a Digital Twin capable of benefiting from a constant influx of real-time data enables production to react to dynamic changes. The implementation of a real-time Digital Twin requires a fully automated data acquisition process based on a homogeneous and robust database management system. The main challenge is a scientifically defined, fully automated method to identify, collect and process all data from heterogeneous data sources and manage it in a secure and reliable environment. Furthermore, the corresponding data acquisition system must have a certain flexibility to adapt to new parameters, allowing the process to be improved. Holistic solutions for data acquisition, covering both volatile data and master data, are particularly required. The reduction of the delay between the time of data acquisition and the update of the Digital Twin represents one of the main sub-objectives.
Relevance of the research topic:
To develop an effective and accurate methodology describing the implementation of an automated data acquisition process for Digital Twins in a real manufacturing setting, the project utilizes two similar but distinct 3D scanning procedures together with wireless sensor technologies. First, a top-down 3D laser scan with fixed, wall-mounted laser scanners around the shop floor and the working cells extracts overall data regarding the layout and the position of the shop-floor elements. Second, a bottom-up flexible 3D laser scan provided by mobile robots and highly accurate, robot-arm-mounted laser scanners delivers precise point cloud data describing the geometry and position of high-importance shop-floor or process elements. Furthermore, the data set will be enriched by wireless sensory systems (e.g. power consumption, motor payloads, joint angles, instant temperatures) to generate a complete base model for real-time Digital Twins. Finally, the project theme is addressed in two critical manufacturing areas: 1) product quality assurance in discrete manufacturing, exemplified by modular production in the automotive industry, and 2) process quality assurance in continuous manufacturing, exemplified by monitoring and optimizing the multi-layer carbon fiber printing process for the aerospace industry.
Scientific objectives:
The objective of this project is to develop a robust and clear methodology for the implementation of a fully automated data acquisition process, able to provide output and receive input to and from a Digital Twin, in a hybrid and heterogeneous environment, with top-down, bottom-up, and wireless sensory data-generating technologies. For the development and validation of the methodology, a series of scientific and technical objectives are planned, as follows:
a) Identification of the Digital Twin's critical parameters (e.g. type, source, format, importance) at product, process, equipment and factory scale (e.g. process, production system, production area, line, site of production and finally factory network), which should be continuously monitored in real time. The main concept behind the Digital Twin structure is a hierarchical organization that provides the essential data with higher priority and feeds the Digital Twin in real time. It will enable the simulation of the process and material-flow motions and activities to run steadily and without any need for external intervention. The central part of the data output will provide a general overview of the shop floor in real time, while the compiled data from the second-degree sensors and any other IoT devices will provide the discrete events for comprehensive simulations.
b) Identification of suitable technology for capturing real-time data (e.g. 3D laser scanning, safety camera surveillance, local sensors of mobile and stationary robots) according to specific processes (assembly, production or logistics). The core advantage of the Digital Twin approach is its versatility and capacity to virtually simulate any of the existing industrial processes; the only limitations are the data-gathering resources and the delivery time to the simulation. These technologies must be suitable for the type of process being analyzed, the legal environment for data protection, and the external factors that might affect the sensors.
c) Capturing of the in-situ process parameters as the main basis for applying deep learning algorithms, followed by an automated update of the Digital Twin. This can be achieved by employing state-of-the-art modeling and simulation technologies and systems, e.g. the Siemens system portfolio. The data provided by the sensors is the first input into the database, where the program establishes the filtration priority, the protocol of utilization, and the method of implementation in the Digital Twin. The deep learning algorithms will process the resulting Digital Twin, the sensor input data, and possible corrections by the operator if the human remains in the loop, refining the results and providing continuous and incremental improvement.
d) Validation of the methodology in at least two use cases, e.g. quality inspection and 3D carbon fiber printing. Once the system has been verified at the virtual and empirical level through laboratory experiments, it is integrated into a real-life setting. The main purpose of this step is the validation of the system and the optimization of the data analysis algorithm, the Digital Twin integration, and their improvement.
e) Incremental improvement of the methodology and publication to the scientific community. As the last step of the research, the method will be integrated as a tool in an industrial environment and tested on a variety of Digital Twins, with the resulting data serving as a reference for the refinement procedure.
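The hierarchical prioritization described in step a) can be illustrated as a priority-ordered feed in which core shop-floor signals reach the Digital Twin before second-degree sensor data. This is a minimal sketch; the class and sensor names are hypothetical and not part of the proposed system:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class SensorReading:
    priority: int                       # lower value = higher priority (core data)
    source: str = field(compare=False)  # sensor identifier, not used for ordering
    value: float = field(compare=False)

class TwinFeed:
    """Hierarchical feed: higher-priority readings update the twin first."""
    def __init__(self):
        self._queue = []

    def ingest(self, reading):
        heapq.heappush(self._queue, reading)

    def next_update(self):
        return heapq.heappop(self._queue)

feed = TwinFeed()
feed.ingest(SensorReading(2, "iot-camera-07", 0.84))  # second-degree IoT device
feed.ingest(SensorReading(1, "line-plc-01", 412.0))   # core shop-floor signal
print(feed.next_update().source)  # the core signal is served first
```

In a real implementation the queue would be replaced by a streaming layer, but the principle stays the same: the hierarchy decides which data feeds the twin in real time and which feeds the discrete-event simulations.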
Thesis Committee:
Prof. Riedel (ISW), Prof. Herzwurm (BWI - Abt. VIII)
Problem to be investigated:
AI methods often have limited applicability to many manufacturing processes, as the data set that can be
recorded on a single plant in an acceptable amount of time is not large enough to train models. One solution,
especially for SMEs, would be to share the training data, or fully trained models, with owners of similar
equipment or other users facing a similar manufacturing process. However, as preliminary research has
shown, such an approach lacks acceptance, especially among German SMEs in the manufacturing industry,
as data sovereignty is seen as a competitive advantage that should not be available on the market without
control. Promoting such data externalization is nevertheless expedient to increase the effectiveness of AI
solutions and thus their added value.
Relevance of the research topic:
The relevance and applicability of AI methods in production can be demonstrated by various examples.
Probably the best known is the prediction of maintenance intervals, commonly referred to by the narrower
term predictive maintenance. Furthermore, applications in quality control and in the fine control of processes
can be found in the context of object recognition. In addition, further applications for process improvement or
automated optimization such as generation of program code are the focus of ongoing research projects. An
approach that leads to providing SMEs with a training base for AI models can thus be seen as a catalyst for
research across the field. The current state of research does not yet provide extensive insights into how SMEs
can be encouraged to use AI methods. This points to the existence of a research gap at the interface between
manufacturing and business informatics.
Scientific objectives:
In the first step of the research project, based on [1], various architecture models will be derived that allow the
sharing of training data from industrial controllers and AI models trained from this data with third parties. Here,
both platform architectures and peer-to-peer architectures for data-based collaboration [2] will be investigated
and evaluated in particular based on the criteria of data sovereignty. In a second step, an acceptance study
among potential industrial users will be conducted based on the architecture models using several use cases,
such as arc welding. The findings of the acceptance study are to be fed back into the most promising
architecture model. For a dissertation in Faculty 7, the architecture is to be prototypically implemented and its
integration into the process application is to be demonstrated. At the same time, the prototypical
implementation represents a proof-of-concept validation and can also be assigned to design-oriented
business informatics. For a dissertation in Faculty 10, an exploitation model for the resulting AI-based
ecosystems will be elaborated and empirically evaluated. This step represents a proof-of-value validation.
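One conceivable realization of a peer-to-peer architecture that preserves data sovereignty is for each site to share only locally trained model parameters, which are then aggregated in the style of federated averaging. This is a sketch under that assumption; the proposal itself does not fix a concrete aggregation scheme, and the parameter vectors below are toy values:

```python
def federated_average(client_weights, client_sizes):
    """Size-weighted average of model parameter vectors.

    Raw training data never leaves a site; only the trained
    parameters are exchanged, which is the data-sovereignty argument
    for this kind of architecture.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# two SMEs with locally trained (toy) parameter vectors
site_a = [0.2, 1.0]   # trained on 100 recorded welds
site_b = [0.4, 2.0]   # trained on 300 recorded welds
shared = federated_average([site_a, site_b], [100, 300])  # weighted toward site_b
```

Whether such weight sharing is acceptable to the SMEs in question is exactly what the planned acceptance study would have to establish.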
[1] Schmidt, Alexander, Florian Schellroth, and Oliver Riedel. "Control architecture for embedding
reinforcement learning frameworks on industrial control hardware." Proceedings of the 3rd International
Conference on Applications of Intelligent Systems. 2020.
[2] Schüler, F., and D. Petrik. "Objectives of Platform Research: A Co-citation and Systematic Literature
Review Analysis." In: Management Digitaler Plattformen. ZfbF-Sonderheft 75/20, pp. 1-33.
Thesis Committee:
Prof. Möhring (IFW)
Problem to be investigated:
Machining technology distinguishes between cutting with geometrically defined cutting edges (turning, milling, drilling, etc.) and cutting with geometrically undefined cutting edges (grinding, honing, lapping, etc.). Processes with geometrically undefined cutting edges generally exhibit lower material removal rates, but they are particularly suited to achieving higher machining accuracies and surface qualities. With ISO machining accuracies of IT8 to IT1 (or Rz of approx. 1 µm), grinding processes are the most important technology with geometrically undefined cutting edges. For fine or finish machining, grinding is used, for example, on ball bearing raceways, bearing rings, bearing seats, tools, turbine components, cylinder heads, camshafts, valve tappets, sealing surfaces on housings and gear shafts, gear teeth, medical instruments and implants, molds (e.g. for the production of plastic and glass products such as optical lenses) and much more; that is, on components that require consistently high manufacturing accuracy even at high volumes, or on individual components with specific, highly demanding properties (e.g. telescope mirrors). To guarantee these consistently high manufacturing qualities even under changing process conditions, monitoring systems based on methods of artificial intelligence (AI) or machine learning (ML) are an obvious choice. Due to the undefined arrangement and shape of the cutting edges on grinding tools, the likewise largely undefined use of cooling lubricants, the change of tool properties over time (through various wear mechanisms acting on the tool), and the process-inherent execution of dressing and conditioning operations on the grinding tool, grinding is subject to a wide variety of partly stochastic, transient influencing factors.
Beyond material removal from the workpiece and the creation of micro- and macro-scale geometric properties, grinding in particular decisively influences the surface and subsurface properties, and thus the functionality and performance of components. Mastering the multiple interdependencies between process and workpiece characteristics is still a field of fundamental research and poses extreme challenges for industrial companies. So far, research has successfully addressed only "excerpts" of the overall interdependencies. A largely autonomous self-optimization of grinding processes that takes the entirety of these multiple interdependencies into account has not yet been achieved. AI and ML methods offer particular potential here to map the acting influences on a data basis and thus to enable the implementation of process control strategies.
Relevance of the research topic:
The production costs of a machined workpiece are determined above all by the material removal rate and by increasing tool wear, or by the resulting decline in machining quality. To counteract this, tools in industrial practice are usually replaced as a precaution far too early, which leads to wasted tool-life potential, longer set-up times and higher tool costs. The use of AI-supported intelligent tool management and process monitoring systems offers, in addition to the possibility of gaining a deeper understanding of the interdependencies within the machining process, the potential to make optimal use of the service life of the tool in question.
Artificial intelligence (AI) and machine learning (ML) offer the opportunity to make a holistic understanding of machining processes better and more broadly applicable. After initial training of the models to be created, tool wear during machining can be predicted through in-situ measurements using suitable process and machine variables such as vibrations, acoustic signals, or process forces. Conversely, the expected process forces and temperatures can be estimated for a known initial wear state. Furthermore, production costs and component properties or accuracies, such as roughness, burr height, and the microstructure or microhardness present in the material, can be predicted for a known selection of process parameters. The potential of AI and ML methods in this area of manufacturing technology is not yet fully understood or exploited. A holistic optimization of grinding processes could save considerable resources, energy and costs while component quality is maintained or even improved. On the one hand, this topic is scientifically challenging and located at the limits of the current state of knowledge; on the other hand, corresponding solutions offer enormous implementation potential in various branches of industry.
Scientific objectives:
The scientific questions and objectives of this GSaME scholarship derive from the topic outlined above. Based on methods of artificial intelligence and machine learning that are to be selected, prototypes for self-optimizing grinding and finishing processes are to be developed, analyzed and tested. As a foundation, suitable models for the holistic representation of exemplary grinding processes must first be created. A further sub-goal is the integration of validated, predictive (AI) methods for assessing and predicting process, component and tool states, using concrete grinding and finishing processes as examples, together with suitable strategies for reacting to the detected process states, taking into account measurable machine- and process-specific influencing variables and boundary conditions. The boundary conditions comprise the type, number and arrangement of the sensors usually installed in machine tools as well as the possibility of using data from the machine control system. The combination of assessment/prediction methods and measures for influencing the process is to be realized in the form of a digital assistance system that, in addition to modules for data acquisition, condition assessment and action recommendation, also contains interfaces for communication between the modules and with the machine operator. Summary of tasks:
- Validated acquisition of process, machine and component states in grinding and finishing processes using the sensors currently common in machine tools and the available data and signal sources, including the set-up of validated measurement technology and sensors for acquiring the necessary data and signals
- Derivation of correlations and relationships from multi-sensor recordings of grinding and finishing processes to validate the underlying AI-based prediction methods
- Development of an assistance system for communication between operator and machine
- Provision of action measures based on the results of predictive AI methods
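As a minimal illustration of the prediction step described above, tool wear could be estimated from a single in-process signal feature such as the vibration RMS. This is a deliberately simplified sketch with invented toy data; the actual project would combine multi-sensor features with far more capable AI/ML models:

```python
import math

def rms(signal):
    """Root-mean-square of a sampled signal window."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

# toy training data: vibration RMS at the grinding spindle vs. measured wheel wear (µm)
rms_features = [0.5, 0.9, 1.4, 2.0]
wear_labels  = [10.0, 22.0, 37.0, 55.0]

# fit wear ≈ a * rms + b by ordinary least squares
n = len(rms_features)
mx = sum(rms_features) / n
my = sum(wear_labels) / n
a = sum((x - mx) * (y - my) for x, y in zip(rms_features, wear_labels)) \
    / sum((x - mx) ** 2 for x in rms_features)
b = my - a * mx

def predict_wear(vibration_window):
    """Predict wheel wear (µm) from a raw vibration window."""
    return a * rms(vibration_window) + b
```

A condition-assessment module of the envisaged assistance system would sit exactly at this point, mapping sensor features to a wear state that the action-recommendation module can then react to.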
Thesis Committee:
Prof. Hölzle (IAT)
Problem to be investigated:
In the field of production and product development, specific challenges arise in relation to collaboration and communication in remote collaboration. Virtual or augmented reality systems (AR/VR, often summarised under the term metaverse) have great potential here, supplemented by generative AI applications. Although specific VR software tools exist, these are often isolated solutions that only focus on the specific use case and do not offer comprehensive support for creative processes. This discrepancy between currently available technologies and the actual requirements in practice leads to sub-optimal utilisation of the potential of digital tools in the development and innovation process. There is a lack of integrated virtual tools that comprehensively and holistically support and promote creative and collaborative processes in distributed teams.
Relevance of the research topic:
The increasing spread of remote work is leading to a radical change in the world of work. In particular, the understanding and practice of collaboration has changed fundamentally due to the possibility of working from any location. Innovation and product development in particular are characterised by a high degree of interdisciplinary collaboration, as complex problems can only be solved through the interaction of different disciplines. In this context, creativity plays a central role in developing innovative solutions and thus securing competitive advantages. To support creative processes, the integration of digital technologies such as generative artificial intelligence (AI) or VR/AR technologies is rapidly gaining in importance.
In order to enable creative processes in this environment, there is a considerable need for research into a technical and procedural design that overcomes the limitations of today's widespread solutions.
Scientific objectives:
The central question of this research work is how the co-operation between physical and virtual actors in the product development process must be designed with a special focus on creativity and collaboration. To answer this question, the following aspects will be analysed:
1. What features must a tool have to efficiently support communication, collaboration and creativity even in remote settings?
2. How must a user interface for remote settings be designed so that it offers both verbal and non-verbal interaction options and at the same time supports direct spatial collaboration?
3. How can generative artificial intelligence functions be made optimally accessible and usable within the tool in a user-centred manner?
4. What can a technical system architecture look like that maps these functionalities and at the same time can be seamlessly integrated into existing development processes?
5. Which criteria and metrics can be used to evaluate the tool, in particular with regard to visual quality (environment and avatars) and the quality of interaction (e.g. latency)?
This research aims to develop an innovative solution that meets the requirements of modern working environments while fostering creative collaboration. The research will focus on how technical systems and design principles can be combined and implemented to create an effective and user-friendly tool for distributed engineering teams.
Thesis Committee:
Prof. Mehring (IMVT)
Problem to be investigated:
Alongside the particulate phase produced by combustion processes, fine brake dust constitutes the largest share of the health-damaging, respirable, traffic-generated ultrafine particles (PM2.5 and smaller). The size distribution and material composition of the particles depend on the boundary conditions of the braking process under consideration and the materials used in it. They are also determined by thermal influences and particle-particle interactions in the gas phase after release by the braking process. The effectiveness of a separation system used to collect the resulting brake dust is shaped by physical influences spanning several length scales: the macroscopic flow routing at the brake system and the flow of the dust-laden air in the separation system, the temperature conditions in the overall system, the size and shape of the dust particles, the surface properties of the dust particles and of the components of the separation system, and in particular the macro- and microstructure of the filter media employed.
The problem addressed by this work is the capture of the physical processes relevant to particle formation, particle dynamics and particle separation in the brake and separation system of a passenger car. The first step is the virtual development of a suitable test rig, on the basis of which the corresponding physical models are developed, validated and implemented in an overall simulation model. The latter is intended to enable the targeted development of efficient brake dust separation systems for future motor vehicles.
Relevance of the research topic:
Alongside gaseous pollutants, particulate matter in the ambient air poses a considerable health risk. This applies in particular to inner-city residential districts because of the substantial traffic volume found there. In the past, efforts to reduce harmful dust particles, and especially fine dust (PM10 and PM2.5), concentrated primarily on improving combustion engines and their exhaust gas treatment systems. Progress in this area means that today a significant share of the ultrafine particles in the ambient air stems from brake wear, tire wear and particles detaching from the road surface. With regard to the environmental burden caused by brake wear, the areas of particular importance are those where brake dust is generated and which are characterized by a high population density, i.e. at pedestrian crossings in road traffic, on platforms in railway stations, and at tram and underground stops.
Despite the growing expansion of e-mobility and of regenerative braking systems in electric vehicles on road and rail, fine dust from brake wear will remain a burden on people and the environment in the medium term.
Against this background, the given topic pursues: a) the development of a representative virtual test rig that accurately reproduces the aerodynamic and thermal conditions in the brake housing area of a passenger car (this test rig will be built and commissioned by the industrial partner), and b) the development of suitable physical models describing brake dust formation, its dynamics in the flow field and its separation. With regard to b), a step-by-step approach is planned, working from simplified, semi-empirical models toward more complex (multi-scale, multi-physics) models. The goal is to provide an overall simulation model so that effective, ecologically and economically sustainable brake dust separation systems can be developed for automotive applications in the future.
Scientific objectives:
The scientific question concerns the virtual development of a suitable test rig for brake dust separation systems in the passenger car sector and the physical modeling of brake dust generation and separation. Of particular interest here is capturing the influence of the occurring temperature profiles on the generated brake dust particles and the particle distribution, as well as the influence of the air routing at the brake system and in the separation system on particle loading and particle separation.
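A first, deliberately simplified handle on the particle dynamics addressed here is the Stokes relaxation time of a brake-dust particle; the resulting Stokes number indicates whether a particle of a given size follows the air flow or can be separated inertially. The sketch below uses assumed values for particle density and flow conditions, not data from the project:

```python
# Stokes relaxation time: how quickly a brake-dust particle adapts to the air flow.
#   tau = rho_p * d^2 / (18 * mu)
# Stokes number Stk = tau * U / L: Stk >> 1 -> inertial impaction (separable),
# Stk << 1 -> the particle follows the streamlines past the separator.

rho_p = 5000.0    # kg/m^3, assumed density of metallic brake-wear particles
mu    = 1.8e-5    # Pa*s, dynamic viscosity of air
U     = 10.0      # m/s, assumed characteristic flow speed near the brake
L     = 0.01      # m, assumed characteristic separator dimension

for d in (2.5e-6, 10e-6):                 # PM2.5 and PM10 diameters
    tau = rho_p * d**2 / (18.0 * mu)
    stk = tau * U / L
    print(f"d = {d*1e6:4.1f} um  tau = {tau:.2e} s  Stk = {stk:.3f}")
```

Under these assumptions the PM10 particle has Stk above one while the PM2.5 particle stays well below one, illustrating why the finest, most health-relevant fraction is the hardest to separate inertially and why multi-scale models down to the filter-medium microstructure are needed.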
Thesis Committee:
Prof. Verl (ISW), Prof. Graf (IFSW)
Problem to be investigated:
Laser-based processes offer great potential with regard to a completely flexible, universal and self-configuring production of complex components (software-defined manufacturing). This requires manufacturing systems that can adapt independently to new manufacturing processes and can be used universally. An important component here is a universal process control that functions independently of material and process parameters. In the current state of the art, control parameters, for example for welding depth control or for height control in wire-based build-up welding, are manually optimised for every welding task together with a measurement signal that depends above all on the material, but also on the process parameters. As a result, a modification such as a change of material or of the welding task leads to lengthy, labour-intensive adjustment operations on the production system.
Relevance of the research topic:
Laser processing methods have an undisputed potential to be the tool for individualised and function-optimised product design in combination with resource-efficient manufacturing for future industrial production. Laser beam welding of metallic materials is an indispensable manufacturing process for many applications in the future of mobility, such as in the production of battery cells, joining tasks in the manufacture of electric motors, but also in the production of components using additive manufacturing processes such as laser powder bed fusion. In all these areas, control of the welding process is the basic requirement for efficient and high-quality production. As a production technology process, laser processing is located in the thematic core of the GSaME and the topic meets its scientific ambition with its interdisciplinary research needs.
Scientific objectives:
How can a universal control system, i.e. a method for process control that is independent of at least material and process parameters, and perhaps of the process as well, be designed using self-learning algorithms? For this purpose, a control system has to be developed on the basis of established measurement methods, starting with simple control tasks such as welding depth control in laser beam welding or height control in laser wire deposition welding, that independently adjusts itself to the changes in the measurement signals and the process dynamics for different materials and process parameters. To this end, the differences in signal acquisition and the changes in process dynamics must be determined experimentally and taken into account when designing the control system. Modern imaging methods of process analytics are available for this purpose, such as the X-ray diagnostics available at the IFSW, state-of-the-art high-speed video methods with up to 100,000 frames per second at high spatial resolution, and experiments at large-scale research facilities such as the German Electron Synchrotron (DESY).
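The kind of self-adjusting control loop described above can be hinted at with a PI controller whose process gain is estimated online, so that the loop re-scales itself when a material change alters the process response. This is a hypothetical minimal sketch, not the project's method; the actual work would build on established measurement methods and more capable self-learning algorithms:

```python
class AdaptiveDepthController:
    """PI control of welding depth with an online estimate of the process gain.

    The gain estimate (depth change per unit laser-power change) is updated
    from observed input/output steps, so the controller re-tunes itself when
    material or process parameters change the process dynamics.
    """
    def __init__(self, setpoint, kp=0.5, ki=0.1):
        self.setpoint = setpoint
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.gain_est = 1.0   # initial guess for the process gain

    def update(self, measured_depth, last_power_step, last_depth_step, dt=0.001):
        # crude online gain estimation (exponential forgetting of old estimates)
        if abs(last_power_step) > 1e-9:
            observed = last_depth_step / last_power_step
            self.gain_est = 0.9 * self.gain_est + 0.1 * observed
        error = self.setpoint - measured_depth
        self.integral += error * dt
        # normalise the PI output by the estimated plant gain
        return (self.kp * error + self.ki * self.integral) / self.gain_est
```

If a material change doubles the melt-pool response, the estimated gain converges toward the new value and the controller output is scaled down accordingly, instead of requiring a manual re-tuning campaign for the new welding task.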