Sunday, February 3, 2019

Photogrammetry test.

Photogrammetry is a technique that lets you take a bunch of photos of something and, with some mathemagics, transform those 2D images into a 3D model.

It's a boring ant nest, but I'm happy with the result, given that I took 29 images. (The more the better.)

Use your mouse's scroll to zoom in and out, left click and drag to rotate the view, right click and drag to pan around. The "f" key will go full-screen and then back out.


(Direct link to the sketchfab page if you want to view it over there: https://skfb.ly/6GHuS )

Sunday, December 30, 2018

This took way longer than it should have. (And what I *think* it taught me.)


*Hitting hand rhythmically on the table, shaking stuff...* This Took Longer Than It Should Have!

Now, what does that video show anyway? Just some text with a typing-in effect.
Yeah, that's the front of it, or rather one of the 4 versions of it. It's the dialog system that I wanted to make for a game.

There's a function that loads a dialog file created from my custom dialog editor. The dialog function looks at the NPC and what kind of dialog they are supposed to have, and enables the correct kind of screen to show said dialog. It can be a simple "Hello" / "Hi, you ok?" kind of dialog, a long conversation, a dialog tree with the available options as buttons, or lastly an interrogation mode, where you can actually construct questions; if those questions are something the NPC has the answer to, they will reply accordingly.

"Vanilla stuff," you might say, and you'd be right. But this is the first time I've tried to make something like that, and on top of that I had my own ideas of what I wanted it to do. First of all, I wanted the function to be "NPC-agnostic." I didn't want to have to script stuff constantly; I just wanted to pass in the name of the character you are talking to, along with some flags for what kind of a dialog it is, and have it come up. Standard stuff. But then I wanted the system to do things like handing over an item through dialog. Or enabling a new dialog topic mid-questioning. And then I wanted the system to be able to control the non-player character as well: say you said something the NPC didn't like and it made them anxious, so their pose on screen should change. But what if I wanted something to happen externally at a certain point during the dialog?! Like something blowing up, or a phone ringing! And of course, it should be able to jump from simple dialog to interrogation mode.
Or! Or ... whatever! The system should be agnostic to everything! Make it able to do *whatever*. "I know how I'll do it! Every dialog line can have its own flags and codes embedded in it, and the system will check those codes while it's displaying the dialog, and fire off other functions and animations all over the place! Do all the things!"
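To make that embedded-codes idea concrete, here's a minimal sketch of it in Python (my actual system lives in a game engine and looks nothing like this; every name here is made up for illustration). Each dialog line can carry `@command:argument` tokens that the display function strips out and fires off as it shows the line:

```python
def parse_line(raw):
    """Split a raw dialog line like 'Take this. @give_item:key'
    into display text plus a list of (command, argument) pairs."""
    text, commands = [], []
    for word in raw.split():
        if word.startswith("@"):
            cmd, _, arg = word[1:].partition(":")
            commands.append((cmd, arg))
        else:
            text.append(word)
    return " ".join(text), commands

# Handlers for the side effects a line can trigger (all hypothetical).
HANDLERS = {
    "give_item":    lambda arg, log: log.append("player receives " + arg),
    "unlock_topic": lambda arg, log: log.append("topic unlocked: " + arg),
    "set_pose":     lambda arg, log: log.append("NPC pose -> " + arg),
}

def display_line(raw, log):
    """Show a line and fire whatever codes are embedded in it."""
    text, commands = parse_line(raw)
    for cmd, arg in commands:
        HANDLERS[cmd](arg, log)  # side effects happen mid-display
    return text
```

So `display_line("Take this. @give_item:rusty_key", log)` returns "Take this." and records the item hand-off. Handy, right up until every new idea becomes yet another handler bolted onto the same table...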

People with experience know where this is leading: bloaty, messy code. It does work now, but I notice that I'm cleaning things as I go along, and by cleaning I mean selecting blocks of code and hitting "Del".

Since the dialog system in my mind was a single thing, I did just that. Made a big file with lots of functions in it, for the various dialog types.

NO! *table-hitting intensifies*.
Don't do that, George. Had I asked around some more, I think it would've been obvious that this would just make the code hard to follow when I'd inevitably need to revisit it later on. A bunch of functions clumped together without needing to talk to each other.
Since there are different flavours of the dialog, have a file for each dialog type, with only the functions relevant to that type together. And for the dialog lines that are read from the file, just use a single struct to place them in and have it read by the different dialog classes. Everything doesn't have to live in the same class.
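Sketched in Python for brevity (all names hypothetical; the real thing is engine code), the split looks like this: one shared record type for the lines read from the file, and a small class per dialog flavour that only knows its own presentation:

```python
from dataclasses import dataclass, field

@dataclass
class DialogLine:
    """The single shared struct every dialog class reads from."""
    speaker: str
    text: str
    flags: list = field(default_factory=list)

class SimpleDialog:
    """Plain back-and-forth lines, shown as-is."""
    def show(self, lines):
        return [line.speaker + ": " + line.text for line in lines]

class Interrogation:
    """Only reveals lines flagged as answers the NPC actually has."""
    def show(self, lines):
        return [line.text for line in lines if "answer" in line.flags]
```

Each class stays small and never needs to know the others exist; only `DialogLine` is shared.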


In the video just above, the first NPC is asking our main character to go and interrogate the other Character about stuff. (That Character's name is "Ch". Yes, I was feeling creative that night... Now that I think about it, it should be Ch1 and Ch2. Oh well.)

The video shows 3 different types of the dialog system at work. Yes, the visuals do need work and not all places have that "type-in" effect - but it's coming. It's a caller-friggin-agnostic function now, so all that needs to be done is to call it for that dialog panel...

Simple dialog, that unlocks the dialog tree type for the other character, which in turn leads to the interrogation mode.

*Hits table* Don't Make Monolithic Systems If You Don't Have To.

Oh btw, there's a ready-made 3rd party system for dialog that could be utilized...
I guess it can be modified to do *all the things* and become a monolithic blob if required.

Sunday, July 22, 2018

Unity starter tips - performance.

"Uh oh. It's another one of those starter-tips things from people that think they know game engines."
Yeah, it's one of those things. Things that I ran into myself and had to hunt down all over the place when my projects had issues. Of course this is by no means a be-all-end-all list. But it is a start!

1) Remember to set up your project correctly in the Player Settings: set up skinning and batching depending on what you need (leave batching on, of course...), what to log...

2) Know your target platform. Not all platforms are created equal. You won't put the same effects (or the same quality effects) in PC and mobile. Mobile will just get single digit fps, if it will even work in the first place.

3) Set up your physics layers! If you have a physics layer for your player character and you know for sure that the character will never touch or interact in any way with objects belonging to a certain physics layer, disable their interaction in the physics matrix. There is no need to let the engine try to figure out if these two are touching since you know that they won't.

4) Bake stuff in maps! Even if you are targeting DirectX12/Vulkan with all the eye-candy possible, why waste cycles with having dynamic effects enabled for something that you know is going to be static? Bake those shadows and that occlusion. You'll take a tiny bit more to set them up, but you'll save on resources that you'll be able to use elsewhere and the people playing your game won't complain about fps drops.

5) Batch stuff! And I'm not talking only about dynamic batching, because if abused it may end up costing frames instead of saving them. If you are making a detailed environment in a 3D modeling package, when it's time to export, export all the stationary objects as one mesh, so when they go over to the real-time engine they will be treated as a single mesh.

6) Texture atlases. Continuing from #5, combine your textures for that scene into one big one, so when the engine loads textures it won't go "stop rendering, read from disk, load into memory" for every single texture. Sure, you can't combine eeeeverything, especially the textures for characters: if those characters are used in a different scene, the whole texture will be loaded, even the pieces that aren't used.

7) When making a new script, always remove that Update function if it won't be used. If it's left in the script, the engine will still call into it every frame to see what it should do. It won't skip a function just because it's empty.

8) Comparing tags. gameObject.CompareTag("thing") is faster than gameObject.tag == "thing".
So if you're trying to see whether this is the right tag, use CompareTag. It's a native Unity function that avoids allocating a string just to compare it, so it has less overhead... long story short, it uses less stuff, and things add up.

9) Pool objects. It's really tempting to simply have a call that creates/spawns an enemy from a script: you just write it and, when the game is about to use the enemy, spawn it. Well... that introduces hiccups when the entity is spawned, especially if there is a lot of stuff going on with it. And if you have lots spawning, as we said already - things add up. Have a pool of enemies ready, just disabled or hidden out of sight; when needed, teleport an enemy where it's needed and set a boolean that it is awake. When it's destroyed/killed, simply hide it again and set the boolean so it's free to be used later on. This is especially useful when dealing with, for example, waves of enemy ships. When they blow up, don't Destroy() the object, simply hide them.

10) Try to use Coroutines instead of Update functions in scripts. As we said in #7, Update is always checked even if it won't do anything. One of the first ways people just starting Unity try to control objects is Update: at first (myself included) people put a boolean in an IF check, and if it's true the object does something. That hogs the project's main execution thread. So use Coroutines instead. What are Coroutines? They are small routines that are spun off and interleaved with the main loop; unlike an always-on Update, they can sit idle until their delay is up, and when they're done they stop and go away. Coroutines can be started when needed, paused and delayed. Useful for timers and controlling stuff!
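Tip #9 in a nutshell, sketched in Python rather than engine code (names invented for illustration): pre-create the enemies once, hand out sleeping ones, and "killing" just puts them back to sleep:

```python
class EnemyPool:
    def __init__(self, size):
        # All allocation happens once, up front.
        self.enemies = [{"active": False, "pos": (0, 0)} for _ in range(size)]

    def spawn(self, pos):
        """Wake a sleeping enemy and teleport it into place."""
        for enemy in self.enemies:
            if not enemy["active"]:
                enemy["active"] = True
                enemy["pos"] = pos
                return enemy
        return None  # pool exhausted: still no mid-game allocation

    def despawn(self, enemy):
        """'Destroy' by hiding: the slot is free for reuse."""
        enemy["active"] = False
```

Despawned enemies get recycled, so after the first wave the game never pays the creation cost again.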
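And the gist of #10: a coroutine is something the main loop advances one step at a time, not a function it has to poll every frame forever. Python generators give a rough analogue of Unity's yield (illustrative only, not how the engine does it internally):

```python
def countdown(name, n):
    """A tiny coroutine: every `yield` is a pause point where control
    goes back to the main loop, which resumes us on a later 'frame'."""
    while n > 0:
        yield name + ": " + str(n)
        n -= 1

def main_loop(coroutines):
    """Advance each live coroutine one step per 'frame'. Finished ones
    are dropped, instead of being polled forever like an empty Update."""
    frames = []
    while coroutines:
        for co in list(coroutines):
            try:
                frames.append(next(co))
            except StopIteration:
                coroutines.remove(co)
    return frames
```

Several coroutines can tick along together, and each one simply disappears from the loop when it finishes.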

Thursday, May 3, 2018

Previously unseen stuff - digital paintings

These are drawings done in Photoshop for an older project; they weren't used in the end.
But I'm still proud of them :P




Thursday, April 12, 2018

Spheres and UVs.

Balls. I'm not too fond of them when it comes to UV mapping. Ironic, since the two games I've made so far have nothing but balls. Spheres... I mean spheres.

Now, those not into 3D graphics might wonder what UV mapping is. In 3D graphics an object is represented by points that connect with each other in a certain hierarchy to form triangles, and those make up the mesh. Think of it as a game of connect-the-dots, only in 3D space; that gives you a wireframe. Then you need a canvas to drape over that wireframe and give it color.

The traditional way of doing things is to make your mesh (your object) and then "unwrap" it: take the mesh, open it up and flatten it into a 2D image, to make it easy to paint on. There are many ways of unwrapping a mesh, and I'm sure many have seen how a face mesh is unwrapped to its UV map. The easiest object to unwrap is a flat plane, as that one is pretty much already unwrapped. But spheres... If you want to do some detail work on their surface, you need to think about where you want to focus and unwrap accordingly. You need to decide where the bulk of the detail is going to be and pick the unwrap to match, because depending on the way you unwrap a sphere, you can get different kinds of artifacts or stretch marks.
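For the curious, the default spherical unwrap boils down to latitude/longitude math. A rough sketch in Python (not any particular modeler's actual code): longitude becomes U, latitude becomes V, and you can already see the problem coming, because near the poles wildly different directions collapse into the same thin strip of V.

```python
import math

def spherical_uv(x, y, z):
    """Map a point on the unit sphere (y is 'up') to UV coordinates,
    the way a default spherical projection does."""
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)       # longitude, wraps 0..1
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # latitude: 0 at top pole
    return u, v
```

A point on the equator lands in the comfortable middle of the map, while every triangle touching a pole gets squeezed toward v = 0 or v = 1.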

Default UV unwrap, next to its sphere mesh. If you look at the top and bottom of the right side (the unwrap), you'll notice that the edges end up looking like a saw blade. It's really easy to have details looking fine in the red zone, but if you try to add detail in the blue area, it will look as if it has stuff cut off. That's because you'll have pixels of your texture that aren't assigned to actual polygons. There IS a way to make the sphere look better, but I have yet to see a default sphere from a modeling program look good at its poles. And now with VR and skyboxes in computer graphics, it gets more and more important to pay attention.

If you click on the image above to open the full-sized one, take a look at the center of the circles.
The image is from a game that was unfortunately shut down while still in beta, and it shows artifacts in the center of the skydome. What you are looking at is me pointing the camera straight up at the "sky" while waiting for the other players to get ready and start the match. The skydome is a sphere, and while the rest looks fine, the last bit, where the sphere ends in the pointy bits, looks collapsed. That happens because the pointy bits all connect to a single point and sort of pull the texture along.

Sure, there is an "auto" button, but that one almost never gives you the results you want, at least not for me... It could be a starting point for other kinds of meshes, which you would then edit to bring up to snuff.

The way I ended up making my UV map was to make a custom unwrap, projecting the poles onto a flat plane from the top, and then putting the skirt that ends up stretched in its own place. I did that because I wanted the poles to be easily texture-able.

I was also able to pack different looks for the ball into the same texture file, just to have everything in one file. In hindsight it would have been nicer to have 4 separate textures at half the resolution each and load only the one needed, instead of keeping 1 big texture file in memory while using only a quarter of it.
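Picking one of the four looks out of such an atlas is just a scale-and-offset on the UVs. A hypothetical sketch in Python for a 2x2 layout (my real layout was messier than this):

```python
def atlas_uv(u, v, look):
    """Remap a 0..1 UV pair into one quarter of a 2x2 texture atlas.
    `look` (0..3) picks which of the four packed skins to sample."""
    col, row = look % 2, look // 2
    return (u * 0.5 + col * 0.5, v * 0.5 + row * 0.5)
```

The downside is exactly the one above: all four quarters sit in memory even when only one is ever sampled.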



I really, REALLY wanted to make texture atlases for everything. Even when it wasn't needed...

The poles are both projected onto a flat plane, leaving them with an area that can be easily textured. The top and bottom of the ball are mirrored and placed on the same area of the UV map, with the waist cut and projected onto that yellow strip on its own.

I promise, the next time I need a sphere, I'll just use a gradient from top to bottom, or place a single color and skip texture-making altogether.

Sunday, December 31, 2017

In anticipation of 2018

To be honest, my 2018 is actually looking to be a nice one, but you know 
what they say about plans...

Have a great 2018, people. Let's just try to make 2018 a good year and try not
to blame it for all the dumb stuff we'll cause throughout the year :P...

Tuesday, October 31, 2017

Know yourself. - VR app idea.

We know ourselves mostly by what others say about us. What you are able to see immediately are your hands and feet. Even your face you see flipped over, through a mirror, and photos don't convey much about your physicality most of the time, because you can't see yourself walking; you can't understand your own stride.

We could use tech that's readily available today to sort of fix that. With a 3D depth camera we can both capture our bodies and even animate them with motion capture. And I'm not talking about the expensive motion capture used in movies: we can use a €200 depth camera along with an €80-€100 3D tracking application. For the mesh we can use one of various free depth-to-3D programs with said scanner, or use photogrammetry. (You take a crapload of photos of your subject arranged in a sphere, looking inwards instead of away from the center, then feed the photos to software that generates the 3D object along with the right colors.)

Then wear your VR helmet and walk side by side with yourself, or stand back and actually watch how you move about. Of course you'll need to know how to use a game engine like Unity or Unreal, but heck, everything is there. All the tools needed are readily available.

"Why don't you do it then?"
I would, but I'm in the middle of something else. It shouldn't take more than a month, even if things go south at some point :P. Any takers? A uni or something?