I’m very much a “learn by doing” type. To this day, there are entire areas of Photoshop and Blender I’ve never explored just because I haven’t needed to use them for personal projects. I’m sure this pattern will hold true for Unreal Engine 5 as well. Of course, developing an entire video game requires touching on many more aspects of the engine than simply creating static meshes, so hopefully I’ll start to feel well-rounded at some point. At this stage of game dev, it is still very much baby steps.

Also, it turns out designing an RPG takes a lot of knowledge. And a lot of specialized skills, many of which I don’t possess. It would be really helpful to be able to utilize my Stable Diffusion installation to help along the way.

On the other hand, I’m well aware that the model libraries I have for Stable Diffusion probably utilize artwork by artists who weren’t paid for their work to be so used, or even consulted on the matter. How can I even dabble in the world of artistry – of any kind – if I’m just ripping off other artists to do it?

I suppose I could tell myself that I’m “just learning”, and that anything I develop at this stage probably won’t be good enough to share with anyone else. But maybe I’ll put in the time and effort, and that assumption will prove untrue. Or perhaps I’ll still want to re-use some assets from a tutorial project. Do I want to spend my time wading through everything I made trying to figure out if it was something I … actually made?

Might as well learn how to do things right the first time.

That means establishing where I am at personally with using AI to enhance a project I might want to eventually profit from, or even to share with others.

Right off the bat, I ran into tools that will take a 2D image and convert it into a 3D model one can import into Blender and, subsequently, into Unreal Engine. I’m not sure of the entire process involved, but it didn’t even seem to require any rigging. And given that many of the demo images used on the website for these tools were taken from AI-generated text-to-image prompts in the first place, I’m not really comfortable taking the chance that going from a text prompt to a fully realized 3D character won’t just be me ripping off someone else’s art.

I find I am comfortable converting my own drawings into 3D models, so there’s a reason to break out the old Prismacolors. I’m also comfortable converting my own photographs – or freeware photographs – of fairly standard items into 3D models: like a dinner plate or an old radio. And in those cases, it’s for the mesh. I’d still be creating custom textures.

When it comes to AI involvement in creating a player character, for example, I was willing to use Stable Diffusion to create a good 2D picture of a bald guy with decent definition. I used this to create two separate 3D models: one for the full figure, and a separate one for the head alone.

There won’t be a single AI pixel or vertex on the finished mesh. Instead, I’m using the original 2D image to create a silhouette I will then extrude over the 3D model I imported from the AI, leaving me with a full model I can then sculpt to be something much more generic and less stylized than what the AI produced.

If it sounds like 3D tracing, well, it kind of is. I’m just not good enough of an artist to produce a suitable figure drawing to give the AI, and the thought of feeding it a picture of me in my underwear makes me want to cry. Still, I do plan on re-topologizing everything as I want the player character to be fully customizable.

The 3D model that the AI produced would be completely unsuitable for a realistic-in-appearance human character at any rate. The proportions are disastrous and it is nowhere near detailed enough around the fingers and toes. But I can squish it into a suitable shape and use it as sort of a seamstress’ mannequin to piece together a frankenmesh I can then use as a guide to build something more realistic in appearance.

Mostly, I love making my own ish, even if it isn’t always the best artistry on the planet. AI isn’t as big of a draw as I thought it would be. It’s nice to have the option to create instant guides to assist the eye in getting anatomy in the correct spot. There’s still too much about rigging and weighting that I’d never learn if I let an AI do it for me, plus the generated UV maps are practically a hate crime, and I definitely like creating all my own materials even if I do suck at Photoshop.

I guess the gist of the above is this: if I ever share screenshots of anything I’m doing in UE5, then it’s legit. Either I made it myself, or I utilized an asset provided through Unreal Engine and used it according to the license, which I would indicate at the time. (I don’t see myself custom building a lot of landscape pieces or plants at this point, but one never knows).

Other than that, I know that Epic Games has provided Unreal Engine with the ability to turn an audio recording of a person speaking into a lip-synced animation, and that definitely uses AI. Either Epic Games developed that technology themselves, or they licensed it from someone. All the audio files I’d be creating myself. So I’m good with all of that.

I guess I could have summed up this entire post with the following statement: I don’t mind getting an AI assist for any computational process, just so long as I’m not stealing the hard work of unwilling participants. Optimized MetaHumans as NPCs and 3D imports of things like light bulbs or car tires: check. Fully AI-generated characters and 3D imports of specifics like a Chippendale lamp or a Rolls Royce: no.

AI is a developing technology, and people seem all up in arms about its ethical use. I’m not judging anyone else. I don’t see how I could complete any gaming project on my own without some artificial assistance, so I thought I’d share my thoughts on the subject, perhaps just to nail them down for my own peace of mind.

