Tutorial: FaceGen Modeller 15
I think you can import the FG Modeller OBJ as a wardrobe item, carefully position it, and then use DS to fit the Genesis model's head area to the FG Modeller OBJ. You'd then want to export the Genesis model to Blender or something and sculpt it some more to match the FG Modeller OBJ more closely.
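If you go the Blender route, the "sculpt to match" step can be roughed in before any hand work with a Shrinkwrap modifier. A minimal sketch, assuming the Genesis export and the FG Modeller OBJ are both already in the scene; the object names and the "head" vertex group are placeholders for whatever your scene actually uses:

```python
# Blender Python sketch: conform the Genesis head to the FaceGen OBJ using a
# Shrinkwrap modifier as a starting point for manual sculpting.
# Object names ("Genesis8", "FaceGenHead") and the "head" vertex group are
# assumptions -- rename them to match your own scene.
import bpy

genesis = bpy.data.objects["Genesis8"]      # exported Genesis figure
target  = bpy.data.objects["FaceGenHead"]   # imported FG Modeller OBJ

mod = genesis.modifiers.new(name="FitToFaceGen", type='SHRINKWRAP')
mod.target = target
mod.wrap_method = 'NEAREST_SURFACEPOINT'    # pull verts onto the FG surface
mod.vertex_group = "head"                   # limit the effect to the head area

# Apply the modifier so the result can be exported back out as a morph OBJ.
bpy.context.view_layer.objects.active = genesis
bpy.ops.object.modifier_apply(modifier=mod.name)
```

From there you'd still sculpt by hand; the shrinkwrap just gets the big shapes into place.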
And trust me, it's easier that way. I'm sure you might be able to make something work with just the Modeller, but it will be a tedious and painful experience, LOL. With FaceGen Artist it is one click to export to whichever Genesis 1, 2, 3, or 8 model you prefer. FaceGen Artist has a free demo on the official FaceGen website. The demo is limited to exporting only G1 and G2 heads, and it brands them with an "FG" on the forehead. But you can see what the app can do without having to invest in the paid version, and again, if you save the faces as .fg files you can load them in the paid version or any other version of FaceGen. Give it a try.
If you take an OBJ and just apply it to a DAZ figure, you get a whole new mesh that is not recognized by DAZ as a Genesis model. You would have to rig it. This can be done with the Transfer Utility for a simple rigging. The figure will then pose, but the head will likely have issues with any expression. To get better rigging you would have to rig the face manually, and that is not an easy task by any measure. There are tutorials for this.
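For intuition about why the quick transfer breaks expressions: the rough idea behind that kind of simple rigging pass is to project skin weights from the donor figure onto the nearest points of the new mesh. Here's a toy NumPy/SciPy sketch of that idea, not DAZ's actual Transfer Utility algorithm, just an illustration:

```python
# Toy sketch of nearest-neighbour skin-weight projection, the rough idea
# behind a "simple rigging" transfer. NOT DAZ's actual Transfer Utility,
# just an illustration with random stand-in data.
import numpy as np
from scipy.spatial import cKDTree

def transfer_weights(donor_verts, donor_weights, new_verts):
    """donor_verts: (N,3), donor_weights: (N, bones), new_verts: (M,3).
    Each new vertex copies the weights of its nearest donor vertex."""
    tree = cKDTree(donor_verts)
    _, idx = tree.query(new_verts)   # nearest donor vertex for each new vertex
    return donor_weights[idx]        # (M, bones)

donor_v = np.random.rand(1000, 3)
donor_w = np.random.rand(1000, 60)   # 60 hypothetical bones
new_v   = np.random.rand(2500, 3)
new_w   = transfer_weights(donor_v, donor_w, new_v)
print(new_w.shape)                   # (2500, 60)
```

A nearest-point copy like this can carry body weights reasonably well, but it can't reproduce the dense, hand-tuned weighting a face needs, which is why expressions suffer.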
So sculpting the Genesis model in ZBrush to fit the FG Modeller OBJ should work very much like Poser's 'paint a morph' onto a Poser model from another Poser model or OBJ file, if you've done that before.
Facegen uses its own head model, so the verts and polys won't match the Genesis head. The more that I think about it, vertex projection is probably the wrong way to do it. The Facegen and Genesis head models are too different. I think Facegen is marking facial points on the Genesis model, which they have in common with their own head model, and then they run all vertices through an iterative solver or a morphing "black box". That's why Facegen unintentionally warps the eyes and jaw. So, in other words, they're transferring facial feature rules only, like the distance between the eyes, and then solving for those rules.
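To make the "transfer rules, then solve" idea concrete, here's a toy sketch: take a few landmark-pair distances measured on the FaceGen head as the rules, then iteratively solve for morph weights on a stand-in Genesis head that satisfy them. All the data here is random placeholder, and this is my guess at the approach, not FaceGen's actual solver:

```python
# Toy illustration of "transfer rules, then solve": match a few landmark
# distances (e.g. inter-eye distance) by solving for morph weights with an
# iterative least-squares solver. Landmark indices, morph deltas, and target
# distances are all made-up placeholder data.
import numpy as np
from scipy.optimize import least_squares

base   = np.random.rand(500, 3)             # stand-in Genesis head vertices
morphs = np.random.rand(10, 500, 3) * 0.1   # 10 hypothetical morph deltas
pairs  = [(10, 42), (100, 230), (7, 480)]   # landmark index pairs ("rules")
targets = np.array([0.9, 0.5, 1.2])         # distances measured on the FG head

def residuals(w):
    verts = base + np.tensordot(w, morphs, axes=1)   # apply morph weights
    d = [np.linalg.norm(verts[i] - verts[j]) for i, j in pairs]
    return np.asarray(d) - targets

fit = least_squares(residuals, x0=np.zeros(10))
print(fit.x)   # morph weights that best satisfy the distance rules
```

Note how regions with no rules attached to them are unconstrained, which would explain the warped eyes and jaw.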
This is a tough one, but not impossible. You will have to use Python or another compatible scripting language to do this quickly, though you can do it by hand, slowly, if you're not practiced at programming. The idea is that you will not be exporting your native model, but a base model that has been fitted to your form using a difference morph creator (which calculates the differences and generates body and face morphs accordingly), or by using a hand tool to create each morph on the base model until it matches your native model. Once you've converted them, you can save the model and await the overlay of the face. At this point, if you haven't downloaded the face set for FaceGen that matches the base model, you should do so. Install the base set into FaceGen and open it. Now you can create your new face in FaceGen, using whichever of its tools suits you best. You should then be able to export the face to DAZ Studio. Technically, the Artist version of FaceGen can do this directly, but you can use the same tools I describe above to match your native face to the modelled face and tie it all up into a nice morph set.
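The difference-morph step is the scriptable part. A minimal Python sketch, assuming both OBJs share vertex count and order; the filenames are placeholders:

```python
# Minimal sketch of a "difference morph creator": load a fitted base-model
# OBJ and your native-model OBJ (same vertex count and order), compute the
# per-vertex differences, and apply them as a dialable morph.
import numpy as np

def read_obj_verts(path):
    with open(path) as f:
        return np.array([[float(x) for x in line.split()[1:4]]
                         for line in f if line.startswith("v ")])

base   = read_obj_verts("base_fitted.obj")    # base model fitted to your form
native = read_obj_verts("native_model.obj")   # the model you want to match
assert base.shape == native.shape, "topologies must line up vertex-for-vertex"

deltas = native - base              # this IS the morph: one offset per vertex
weight = 1.0                        # a morph dial scales the offsets
morphed = base + weight * deltas    # at weight 1.0 this reproduces the native shape
```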
Download the base model face set from FaceGen that matches your base model in DS. Using this set, you should be able to get a face design from the FaceGen app. There are two ways this can work from here on. If you have the Artist version of the FaceGen software, you can export to a DS-compatible format, and it should work as a morph for the base face. If you have Modeller, you have access to the capability, but not as directly. You can download the Artist version and run a fully functional trial (no watermark) to export the face, but you may have to reinstall every time, and you'll have to mess with your registry values to wipe a few things. Not fun. If you have both versions as a package deal, which they have pushed a few times, you should be fine: create the face in Modeller, open it in Artist, and export to DS. If neither of these fits, export something you can import into DS as an object or garment, then either use a script to map the morphs onto the base face, or morph it yourself. Technically, the object should contain the same mapping information as the base face, but the sizing measurement standards may differ. If so, you can reset those for the object in Hexagon or other software, then copy them and paste them over the mappings of a copy of the base face, and save that from Hexagon to DS. This can be slow, but it can work.
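If the mismatch really is just the measurement standard (a uniform unit scale), you don't even need Hexagon for that part; a few lines of Python can rescale the OBJ before import. The scale factor below is a made-up example; derive the real one by comparing a known distance (say, inter-eye) on both meshes:

```python
# Quick sketch: uniformly rescale an OBJ so its measurements match the base
# face before importing into DS. The scale factor is an assumption -- derive
# it by comparing a known distance on both meshes. Filenames are examples.
SCALE = 100.0   # hypothetical: source units are metres, target expects cm

with open("facegen_export.obj") as src, open("rescaled.obj", "w") as dst:
    for line in src:
        if line.startswith("v "):
            _, x, y, z = line.split()[:4]
            dst.write(f"v {float(x)*SCALE} {float(y)*SCALE} {float(z)*SCALE}\n")
        else:
            dst.write(line)   # faces, UVs, normals pass through unchanged
```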
Personally, I prefer Reallusion software to FaceGen. They have an add-on that allows you to import and export to different modelling software, so you can mold it to your pipeline. FaceGen is more for still work, and can be very helpful when creating still imagery or concepts, but beyond that it just has too many limitations.
In this light, it is important to recognize the development of face-modelling techniques that allow a single image of a face to be modelled onto a three-dimensional average face, and then for the modelled face to be used to generate novel images. The three-dimensional morphable face model (Blanz & Vetter, 1999, 2003) was constructed on the basis of laser scans from 100 males and 100 females, with each scan representing two different kinds of information (see O'Toole, Vetter, Troje, & Bülthoff, 1997). The two kinds of information represent the three-dimensional head surface data and the texture average (sometimes referred to as a two-dimensional reflectance map). An average of each dimension was then computed, and every face was coded upon a continuous scale representing deviation from this given 3D and texture average (for a more in-depth discussion of how the two kinds of information are computed, see Blanz & Vetter, 1999; Vetter & Poggio, 1997). Principal component analysis (PCA) is then conducted to find the eigenvectors, allowing a new range of faces to be synthesized. What this method of construction (and others like it) enables is for each face to be rendered under clearly defined lighting conditions or views (for a detailed discussion of PCA and generated faces, see Hancock, Burton, & Bruce, 1996; Vetter & Walker, 2012).
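The construction is straightforward to sketch. A toy NumPy illustration of the PCA step and synthesis described above, with random data standing in for the 200 scans (the same treatment would apply separately to the shape and texture dimensions):

```python
# Toy sketch of the morphable-model construction described above: stack the
# scans as vectors, compute the average, run PCA, and synthesize a new face
# as the average plus a weighted sum of eigenvectors. Random data stands in
# for the 200 laser scans.
import numpy as np

n_scans, n_verts = 200, 5000
scans = np.random.rand(n_scans, n_verts * 3)   # each scan flattened to a vector

mean = scans.mean(axis=0)                      # the 3D average face
centered = scans - mean
# PCA via SVD: the rows of Vt are the eigenvectors (principal components)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 50                                         # keep the top components
coeffs = np.random.randn(k) * (S[:k] / np.sqrt(n_scans))  # plausible deviations
new_face = mean + coeffs @ Vt[:k]              # a novel synthesized face
```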
4.1. Intelligent Control System. If a robot wants to show its own feelings in a given moment, it needs to analyze not only the external environment and its own emotional state, but also signals of human language, body movements, and facial expressions, and this requires modeling of the emotional state of the robot. As shown in Figures 2 and 3, we devise the control architecture of the robot head, use computer vision and sensors to get information from humans, and send it to three databases. After processing, the system can make decisions and generate corresponding orders; finally, we use FACS and emotional speech synthesis to display humanoid feelings.
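A schematic sketch of that control flow, purely as a reading aid and not the authors' implementation: perceptual inputs feed the databases, a decision step selects an emotional state, and the output stage maps it to FACS action units (the databases and the emotion-to-AU mapping below are illustrative stand-ins):

```python
# Schematic sketch of the described control flow (not the authors' code):
# perceptual input updates stored state, a decision step picks an emotion,
# and the output stage maps it to FACS action units plus a speech tag.
ENV_DB, EMOTION_DB, LANGUAGE_DB = {}, {"current": "neutral"}, {}

FACS_FOR_EMOTION = {            # hypothetical emotion -> action-unit mapping
    "happy":   ["AU6", "AU12"],           # cheek raiser + lip corner puller
    "sad":     ["AU1", "AU4", "AU15"],    # inner brow raiser, brow lowerer...
    "neutral": [],
}

def decide(vision, sensors, speech):
    """Combine the three information streams into an emotional decision."""
    ENV_DB["last_scene"] = vision
    LANGUAGE_DB["last_utterance"] = speech
    if "smile" in vision or "praise" in speech:
        EMOTION_DB["current"] = "happy"
    elif sensors.get("touch") == "hit":
        EMOTION_DB["current"] = "sad"
    return EMOTION_DB["current"]

emotion = decide(vision="smile detected", sensors={}, speech="praise heard")
print(emotion, FACS_FOR_EMOTION[emotion])   # drive face servos + speech synthesis
```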