Thursday, April 12, 2018

Spheres and UVs.

Balls. I'm not too fond of them when it comes to UV mapping. Ironic, since the two games I've made so far have nothing but balls. Spheres... I mean spheres.

Now, those not into 3D graphics might wonder what UV mapping is. In 3D graphics an object is represented as points that connect with each other in a certain hierarchy to form triangles, and those triangles make up the mesh. Think of it as a game of connect-the-dots, only in 3D space; that gives you a wireframe. Then you need a canvas to drape over that wireframe and give it color. The traditional way of doing things is to make your mesh (your object) and then "unwrap" it, which means opening the mesh up and flattening it into a 2D image so that it's easy to paint on.

There are many ways of unwrapping a mesh, and I'm sure many have seen how a face mesh is unwrapped to its UV map. The easiest object to unwrap is a flat plane, as that one is pretty much already unwrapped. But spheres... If you want to do some detail work on their surface, you'll need to think about where you want to focus and unwrap accordingly. You need to decide where the bulk of the detail is going to be and pick your unwrap to match, because depending on the way you unwrap a sphere, you'll get different kinds of artifacts or stretch marks.

Default UV unwrap, next to its sphere mesh. If you look at the top and bottom of the right side (the unwrap), you'll notice the edges end up looking like a saw blade. It's really easy to have details looking fine in the red zone, but if you try to add detail to the blue area, it will look as if it has stuff cut off. That's because you'll have pixels of your texture that aren't assigned to any actual polygon. There IS a way to make the sphere look better, but I have yet to see a default sphere from a modeling program look good at its poles. And with VR and skyboxes in computer graphics, it's getting more and more important to pay attention to this.
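
To see why the poles behave like that, here's a minimal sketch of the standard spherical unwrap most modeling programs generate by default (the function name is my own; a real modeler also has to split the seam and duplicate the pole vertex per triangle):

```python
import math

def spherical_uv(x, y, z):
    """Default-style spherical unwrap for a vertex on a unit sphere:
    u wraps around the equator (longitude), v runs pole to pole (latitude)."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v
```

Near a pole (y close to ±1), a whole ring of vertices lands on almost the same v while their u values fan out across the full width of the map, so the triangles touching the pole stretch into the long thin teeth of that saw blade, and texture pixels that fall between the teeth never map to any polygon.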

If you click on the image above to open the full-sized one, take a look at the center of the circles.
The image is from a game that was unfortunately shut down while still in beta, and it shows artifacts in the center of the skydome. What you are looking at is me pointing the camera straight up at the "sky" while waiting for the other players to get ready and start the match. The skydome is a sphere, and while the rest looks fine, the last bit, where the sphere's triangles narrow into points, looks collapsed. That happens because those pointy bits all connect to a single point and sort of pull the texture along with them.

Sure, there is an "auto" button, but that one almost never gives you the results you might want, at least not for me... It could be a starting point for other kinds of meshes that you would then edit to bring up to snuff.

The way I ended up making my UV map was a custom unwrap: I projected the poles onto a flat plane from directly above, and then moved the skirt, which would otherwise end up stretched, to its own place. I went that way because I wanted the poles to be easy to texture.

I was also able to pack different looks for the ball into the same texture file, just to have everything in one place. In hindsight, it would have been nicer to have four textures at half the size each and load only the one needed, instead of having one big texture file loaded in memory and only using a quarter of it.
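
The packing itself is simple, for what it's worth: with four looks sitting in four quadrants of the atlas, switching looks is just an offset and a scale applied to the UVs. A quick sketch of the idea (a 2x2 quadrant layout assumed, not necessarily how my file was actually arranged):

```python
def atlas_uv(u, v, look):
    """Remap base UVs in [0, 1] into one quadrant of a 2x2 texture atlas.
    look picks the quadrant: 0 = bottom-left, 1 = bottom-right,
    2 = top-left, 3 = top-right."""
    offset_u = 0.5 * (look % 2)
    offset_v = 0.5 * (look // 2)
    return offset_u + u * 0.5, offset_v + v * 0.5
```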

I really, REALLY wanted to make texture atlases for everything. Even when it wasn't needed...

Both poles are projected onto a flat plane, leaving them with an area that can be easily textured. The top and bottom of the ball are mirrored onto the same area of the UV map, with the waist cut off and projected onto that yellow strip on its own.
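
A rough sketch of what that top-down planar projection boils down to (my own simplified version, for a unit sphere): drop the vertical axis and use the two horizontal coordinates as UVs, scaled to fit the cap.

```python
def planar_pole_uv(x, y, z, cap_radius):
    """Project a pole-cap vertex straight down (or up) onto a flat plane:
    ignore the vertical axis (y) and keep x and z, scaled so the cap
    fills the [0, 1] UV square. cap_radius is the horizontal radius
    at which the cap ends and the waist strip takes over."""
    u = 0.5 + x / (2.0 * cap_radius)
    v = 0.5 + z / (2.0 * cap_radius)
    return u, v
```

Because the projection ignores height, there's no single pinch point at the pole; stretching only creeps in toward the rim of the cap, which is exactly the part that gets cut off and moved to its own strip.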

I promise, the next time I need a sphere, I'll either just use a gradient from top to bottom, or place a single color and skip texture-making altogether.

Sunday, December 31, 2017

In anticipation of 2018

To be honest, my 2018 is actually looking to be a nice one, but you know what they say about plans...

Have a great 2018, people. Let's just try to make 2018 a good year and try not to blame it for all the dumb stuff we'll cause throughout the year :P...

Tuesday, October 31, 2017

Know your self - VR app idea.

We mostly know ourselves by what others say about us. All you are able to see of yourself immediately is your hands and feet. Even your face you only ever see flipped over in a mirror, and photos don't convey much about your physicality most of the time, because you can't see yourself walking; you can't understand your own stride.

We could use tech that's readily available today to sort of fix that. With a 3D depth camera we can both capture our bodies and animate them with motion capture. And I'm not talking about the expensive motion capture used in movies: we can use a €200 depth camera along with an €80-€100 3D tracking application. For the mesh we can use one of various free depth-to-3D programs with said scanner, or use photogrammetry. (You take a crapload of photos of your subject arranged in a sphere, only looking inwards instead of away from the center, then feed the photos to software that generates the 3D object along with the right colors.)

Then wear your VR helmet and walk side by side with yourself, or stand back and actually watch how you move about. Of course you'll need to know how to use a game engine like Unity or Unreal, but heck, everything is there. All the tools needed are readily available.

"Why don't you do it then?"
I would, but I'm in the middle of something else. Shouldn't take more than a month, even if things go south at some point :P. Any takers? A uni or something?

Saturday, September 30, 2017

Point and click games - The way you interact should make sense.

Most people my age (30s+) who played a lot of games on their PCs should know about point and click games and their moon-logic puzzles. And on top of that, you had interaction controls with three different action buttons that could have been combined into a single one, or actions that could have been done in several different ways that would in fact still make sense.

I'm not talking about using a fish that you found at the very start of the game on a robot that is waiting at the very end of the game, or waiting for the guy to go to the toilet and get wrecked by an octopus so that you can get his belt... (If anyone gets the references, you get a high five.)

I'm talking about having a dedicated walk command when the game could simply send you to wherever the mouse pointer clicked, and, if you clicked on an interactable object, walk you to the closest point. Or take a fridge: if you had "Open", "Close" and "Interact" commands, the open and close commands (while redundant) could do what they were intended to do, but the "Interact" one should work as well: open the door if it's closed, and close it if it's open.
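
A minimal sketch of that last idea (the fridge and the method names are my own hypothetical stand-ins): the explicit commands stay available, and "interact" just defers to whichever of them makes sense for the object's current state.

```python
class Fridge:
    """A hypothetical interactable with redundant but forgiving commands."""

    def __init__(self):
        self.is_open = False

    def open(self):
        if not self.is_open:
            self.is_open = True
            print("The fridge door swings open.")

    def close(self):
        if self.is_open:
            self.is_open = False
            print("The fridge door clicks shut.")

    def interact(self):
        # "Interact" maps to whichever explicit action fits right now.
        if self.is_open:
            self.close()
        else:
            self.open()
```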

Same with inventory management. In almost all point and click games you would eventually get to a point where you had to find a couple of objects and combine them to do something else. And in most games of that kind I've played (dare I say all?), the player would have to pick up both objects and then combine them in the inventory system. Why couldn't I just pick up one part of the object, walk up to the second part sitting on a table, and simply "use" the first item directly on it?

This issue is still present in some form in today's games, the latest example I've personally seen being FF14. To give something to an NPC, I had to initiate dialog, and the game would then give me a small on-screen box to place the item in, either by dragging the item from my inventory to the NPC's inventory or by selecting "give". Why couldn't I just drag the item from my inventory onto the NPC, and have the NPC programmed to simply accept the item and respond with a "Thank you!" kind of reply? (I'm not talking about point and click games where you have to solve a puzzle by using an item on a character, but about RPGs where characters usually want some item.) Screen-space to world-space conversions can be done, so the game can indeed know if you are pointing at something; that's how mouse pointers work in 3D games and applications. (I have to clarify, though, that if I remember correctly I have seen a couple of games where dragging an item from the inventory onto a character would give them the item, but as far as I could tell that is still not the norm.)
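
That screen-space to world-space step is the only remotely technical part, and it's standard stuff. An engine-agnostic sketch of the idea, assuming a simple pinhole camera looking down -z (in a real engine you'd call its built-in unproject/raycast instead):

```python
import math

def screen_to_ray(mx, my, width, height, fov_y_deg):
    """Turn a pixel position into a normalized view-space ray direction."""
    ndc_x = (2.0 * mx / width) - 1.0          # map pixels to [-1, 1]
    ndc_y = 1.0 - (2.0 * my / height)         # flip so +y is up
    tan_half_fov = math.tan(math.radians(fov_y_deg) / 2.0)
    dx = ndc_x * (width / height) * tan_half_fov
    dy = ndc_y * tan_half_fov
    dz = -1.0                                 # camera looks down -z
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

def ray_hits_sphere(origin, direction, center, radius):
    """Basic ray-sphere test: did the drop land on the NPC's bounds?"""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    return b * b - 4.0 * c >= 0.0  # discriminant; a == 1 for a unit ray

# On releasing a dragged item at screen position (640, 360):
ray = screen_to_ray(640, 360, 1280, 720, fov_y_deg=60)
if ray_hits_sphere((0, 0, 0), ray, center=(0, 0, -5), radius=1.0):
    print("NPC: Thank you!")  # hand the item over instead of opening a menu
```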

"You are splitting hairs dude". Well... you may have the best software out there in terms of what it can do, but if it frustrates users with the controls, or your users keep trying to do things differently, then perhaps the controls are not as intuitive as the designer may think and might need tweaking. It's why most 3D programs or photo editing/design packages ended up having similar controls. It goes back to the "you are not your clients" argument. It's only natural that the designer won't have any serious issues with their own interface, but any issues will come up when the client tests it. Case in point, the first pilots that tested the very first F-16 models. Supposedly the new flight system had a control stick that didn't move physically, but instead registered pressure along its axis depending on how the pilot was acting on it. You see the plane was using a cutting edge technology of fly by wire, which means that the pilot didn't actually control the plane's control surfaces but instead was sending information to a computer and the computer was keeping the plane in the air. The issue was that the pilots couldn't tell how much pressure they were exerting on the stick, the engineers at General Dynamics hadn't thought of that or didn't think that it would be an issue. It was the pilots that came back and said that the feeling needs to change as they couldn't tell how much force they were applying, which that would affect their flying. The solution was to add some leeway to the flight stick to move around.

Tuesday, September 12, 2017

Random Stuff (Pup)

This little guy is just a month and a half old, with the two snaps taken just over a week apart, and the leftmost snap wasn't even him at his tiniest.

So far he's the cutest poop machine.
The poop. God damn the poop. And the pee! GOD!
.
.
.
.
I'm petitioning to put you in diapers, dude...

In other news, it's video games day today. So back to coding stuff.

Sunday, August 6, 2017

Feathered Journeys (mobile app)

Released a new mobile app (for iOS and Android) almost a week ago, for a non-profit organization dedicated to protecting birds and their environment.
The app has a trivia game revolving around the difficulties birds may encounter when migrating, a compact encyclopedia covering a number of birds found in Cyprus, and the ability for the user to report the location of an illegal bird trap if one is encountered on the island.

iOS App store.
Android Play store.

Tuesday, July 4, 2017

6D Mark II thoughts.

My camera feels as if it's on its last... clicks. So I was waiting to see how the new 6D Mark II would turn out. And it does look like a decent camera... except for the autofocus system. I don't care about 4K video not being a feature, or about it not having a CF slot or a headphone jack; that's for the upper-class models.
But GOD DAMMIT Canon! Screwing up the AF system? The thing that allows us to focus when taking photos?

The autofocus system is supposedly related to the one from the 80D. But that camera is an APS-C model, meaning it has a smaller sensor. If you take that system and put it on a full-frame camera, a big part of the frame won't be covered, and since the autofocus points sit in the center, that means the frame's edges get no coverage.
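
Back-of-the-envelope numbers (my own, using typical published sensor dimensions):

```python
# How much of a full-frame sensor would an APS-C-sized AF array cover,
# assuming the AF point spread keeps roughly the same physical size?
apsc_w, apsc_h = 22.3, 14.9   # Canon APS-C sensor, mm
ff_w, ff_h = 36.0, 24.0       # full-frame sensor, mm

print(f"width:  {apsc_w / ff_w:.0%}")                       # ~62%
print(f"height: {apsc_h / ff_h:.0%}")                       # ~62%
print(f"area:   {(apsc_w * apsc_h) / (ff_w * ff_h):.0%}")   # ~38%
```

So the AF points would span only about 62% of the frame in each direction, roughly the central 38% by area, with everything outside that left uncovered.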

That isn't a big deal in everyday shooting, where you'll just aim with the center point and snap, but if you want to compose with a moving subject at the edge of the frame, it makes things really hard.
This compromises a DSLR camera's main job: to focus on and compose a scene.