MSc graduate in IT, Games Technology, with experience programming in large and small teams of varying disciplines. Organized and analytical, with a desire to improve technological as well as interpersonal skills.
In my master's thesis, we developed a creative AI tool for creature generation to assist in a game designer's creative process.
My thesis project, in which we developed an AI tool to assist game designers in their creature ideation process.
2023
An individual project in which I programmed a realistic-looking environment using C++ and OpenGL.
2022
In a world where two deities are going through the worst divorce in existence, the population is caught in the middle of it - and now you are too.
2022
CREAC
The tool utilises Stable Diffusion, a latent diffusion model capable of generating images from a text prompt provided by the user; the text prompt describes the image of interest. In addition, CREAC utilises interactive evolutionary computation (IEC), such that the text prompts are the creative product of both the designer and the AI. A final study was conducted with 20 participants: 80% preferred CREAC over the control versions, and 95% would use CREAC again to create creatures.
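The IEC loop over text prompts can be pictured with a small sketch. The Python below is only a minimal illustration of evolving prompts from the designer's selections, not the actual CREAC code; the word pool, mutation rate and function names are assumptions made for the example.

    import random

    WORD_POOL = ["cute", "scaly", "blob", "dragon", "forest", "glowing", "furry"]  # illustrative pool

    def mutate(prompt_words, rate=0.3):
        """Randomly swap words in a prompt with words from the pool."""
        return [random.choice(WORD_POOL) if random.random() < rate else w
                for w in prompt_words]

    def evolve(selected_prompts, population_size=4):
        """Breed a new generation of prompts from the designer's favourites."""
        children = []
        while len(children) < population_size:
            parent = random.choice(selected_prompts)
            children.append(mutate(list(parent)))
        return children

    # One IEC step: the designer picks favourite images, their prompts are
    # mutated, and Stable Diffusion renders the next generation from them.
    generation = evolve([["cute", "blob", "creature"]])
    prompts = [" ".join(words) for words in generation]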
LANGUAGE: Python
MODEL: Stable Diffusion
TIME: 6 months
TEAM: Pair project
We tested our thesis with two user tests, each time with prizes the users could win. These consisted of bespoke creatures: the images were generated with Stable Diffusion, and we would embroider or crochet them. I crocheted the creatures.
Creature generated by Stable Diffusion with the prompt “Cute little blob creature in a forest”
My crocheted bespoke creature, made from the Stable Diffusion image. I made the pattern myself.
Two versions were created for the prototype testing, version A and version B; their graphical user interfaces (GUIs) can be seen in the figures. Version A used Stable Diffusion with no interactive evolutionary computation (IEC) and served as the control version: the participant could input anything they wanted in the text prompt, with a GUI similar to version B. Version B used IEC. Both versions use Stable Diffusion to generate images, and it takes approximately 30 seconds for each version to generate four pictures.
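For a rough idea of how four pictures with different random seeds could be generated, here is a sketch using Hugging Face's diffusers library; the checkpoint, the seeds and the file names are assumptions for the example rather than CREAC's exact setup (the prompt is the one from the version A screenshot below).

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint (the model CREAC used may differ).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "happy dragon"
    images = []
    for seed in (1, 2, 3, 4):  # four different random seeds -> four pictures
        generator = torch.Generator("cuda").manual_seed(seed)
        images.append(pipe(prompt, generator=generator).images[0])

    for i, img in enumerate(images):
        img.save(f"creature_{i}.png")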
In summary, the majority of participants preferred version A. They preferred the freedom of this version, which was reflected both in the survey responses and in the telemetry, with some prompts being unrelated to creatures.
The following changes were implemented in the next iteration:
Screenshot showing four creatures with four different random seeds and the prompt "happy dragon", generated with version A
Screenshot showing four creatures with four different random seeds and the prompt "painting of a blue creature with big teeth surprised by studio ghibli", generated with version B. The prompt was generated using the randomise button.
A RAINY DAY
To mimic a rainy day, some effects were chosen: rain, wet surfaces, water splashes and rain occlusion.
LANGUAGE: C++
API: OpenGL
TIME: 2 months
TEAM: Individual project
COURSE: Graphics programming
First, I developed the rain box, which follows the camera. At the same time, I implemented the rain occlusion so that the rain would not fall through the roof of the house. I created a depth map of the scene from the rain's direction and tested whether each particle's depth was larger than the stored depth; if so, I set the alpha to zero to make the particle transparent.
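As a rough illustration of that occlusion test - the real version runs in the particle shader, so this Python sketch uses hypothetical names and a simple nearest-neighbour depth lookup:

    def sample_depth(depth_map, u, v):
        """Nearest-neighbour lookup in a 2D depth map stored as a list of rows."""
        h, w = len(depth_map), len(depth_map[0])
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        return depth_map[y][x]

    def raindrop_alpha(particle_depth, depth_map, u, v, bias=0.001):
        """Alpha for a raindrop: 0 when a surface (e.g. a roof) blocks the rain."""
        # Depth map rendered from the rain's direction, like a shadow map.
        occluder_depth = sample_depth(depth_map, u, v)
        # Particle deeper than the closest occluder => it is under cover.
        return 0.0 if particle_depth > occluder_depth + bias else 1.0

    # Example: a 2x2 depth map where the left half is covered by a roof at depth 0.3.
    depth_map = [[0.3, 1.0],
                 [0.3, 1.0]]
    print(raindrop_alpha(0.5, depth_map, 0.25, 0.5))  # 0.0 -> hidden under the roof
    print(raindrop_alpha(0.5, depth_map, 0.75, 0.5))  # 1.0 -> visible in the open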
The box has been upscaled from a 1x1x1 cm cube to surround the camera. This was done so I could work in a space from 0 to 1.
The box contains a fixed number of raindrops. When a raindrop falls out of the box, it is reset and starts falling from the top again.
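The reset logic is simple enough to sketch. The Python below only illustrates the idea (the actual update runs per particle on the GPU), and the names are assumptions:

    def update_raindrop(y, fall_speed, dt, box_height=1.0):
        """Move a raindrop down; wrap it to the top when it leaves the box."""
        y -= fall_speed * dt
        if y < 0.0:
            # Reset: re-enter from the top, keeping the leftover distance so
            # the motion stays continuous. The box spans 0..1, as described above.
            y += box_height
        return y

    print(update_raindrop(0.05, fall_speed=1.0, dt=0.1))  # 0.95 -> wrapped to the top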
Next, I implemented the wet surfaces. The left image shows the scene before the wet surfaces were implemented, and the right shows it after.
How wet a surface becomes also depends on the type of material. Rough materials tend to have a darker albedo and higher specular when wet than metals. To simulate that, I used the material's roughness and metal textures. The roughness texture is then used in the Trowbridge-Reitz GGX distribution and Schlick-GGX functions. The metal texture is used to mix between the dry and wet albedo, so that the car, being made of metal, does not become darker.
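As a sketch of that mixing, here is the idea expressed in Python (the real code is shader code); the darkening factor and roughness reduction are illustrative values, not the ones used in the project:

    def mix3(a, b, t):
        """Linear interpolation per RGB channel, like GLSL's mix()."""
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    def wet_surface(albedo, roughness, metalness, wetness):
        """Darken and smooth dielectric surfaces when wet; leave metals mostly unchanged."""
        # Wet dielectrics get a darker albedo (water absorbs light)...
        wet_albedo = tuple(c * 0.6 for c in albedo)        # 0.6 is an illustrative factor
        # ...but metals keep theirs, so blend by (1 - metalness) using the metal texture.
        albedo_out = mix3(albedo, wet_albedo, wetness * (1.0 - metalness))
        # A film of water makes the surface smoother => lower roughness, which
        # sharpens the Trowbridge-Reitz GGX / Schlick-GGX specular highlight.
        roughness_out = roughness * (1.0 - 0.7 * wetness)  # 0.7 is illustrative
        return albedo_out, roughness_out

    print(wet_surface((0.5, 0.4, 0.3), roughness=0.8, metalness=0.0, wetness=1.0))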
The final step was to implement the water splashes. I generate a splash when a rain particle collides with a surface. To do this, I calculate where the rain particle is coming from and, from that point of view, project it onto the surfaces in the scene. I reuse the depth map from the rain occlusion to test whether the particle has collided with a surface; if it has, I create the splash. All of this is done in the vertex shader. The geometry shader for the rain splashes then turns each particle into a quad, onto which the splash texture is mapped.
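A rough Python sketch of the collision test (the real version lives in the vertex and geometry shaders; the epsilon and names are assumptions):

    def sample_depth(depth_map, u, v):
        """Nearest-neighbour lookup in a 2D depth map stored as a list of rows."""
        h, w = len(depth_map), len(depth_map[0])
        return depth_map[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

    def spawn_splash(particle_depth, depth_map, u, v, eps=0.01):
        """A particle splashes when its depth reaches the surface depth stored in
        the rain-direction depth map (the same map used for rain occlusion)."""
        surface_depth = sample_depth(depth_map, u, v)
        return abs(particle_depth - surface_depth) < eps

    # On the GPU, a hit particle is expanded into a textured quad by the
    # geometry shader; here a hit would simply trigger a splash sprite.
    print(spawn_splash(0.30, [[0.3]], 0.5, 0.5))  # True -> spawn a splash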
The game takes place in a world split into three realms - the technology realm in which the technology deity, Bitinax, resides, the middle island with a temple, and the nature realm in which the nature deity, Magnaphyte, resides.
When the game starts, the player learns that they have been given the role of ’Messenger’, responsible for delivering messages between the two deities.
At the start of the process of creating The Mender, we settled on the world, the core gameplay, and the themes and style we were aiming for. These are some of the slides we presented to the class after the first couple of weeks of working on the project.
We had the palettes of the two sides of the world completed and started creating assets. During this time, we began developing the dialog system and the letter delivery system. We had to build the dialog system from scratch, since there was no free dialog system available in Unity at the time. We implemented it to read ink files, meaning the designers could simply change the files to change the dialog.
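The data-driven idea can be sketched as follows. The actual system was written in C# inside Unity; this Python sketch parses only a simplified, illustrative subset of ink-style syntax (plain lines plus '*' choices) and is not the project's parser:

    def load_dialog(text):
        """Split ink-style text into spoken lines and player choices."""
        lines, choices = [], []
        for raw in text.splitlines():
            t = raw.strip()
            if not t:
                continue
            if t.startswith("*"):                 # ink marks choices with '*'
                choices.append(t.lstrip("* ").strip())
            else:
                lines.append(t)
        return lines, choices

    demo = """Bitinax: Take this letter to Magnaphyte.
    * Accept the letter
    * Refuse"""
    print(load_dialog(demo))

    # Designers edit the ink file; the game picks up new dialog with no code change.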