Japan's virtual human project, the Saya Project. The material is in Japanese, so I couldn't follow all of the progress, but the visual fidelity is quite high and the facial expressions look natural. Going forward, the team reportedly plans to add capabilities such as human-like emotions, behavior recognition, and conversation...
I think that creating a human-like AI requires integrating several components, such as appearance, voice, dialogue, and emotion. Of these, the appearance and voice components seem to have reached a certain level in both polish and personality. However, for the part responsible for thinking, the way forward still isn't clearly visible. For example, I remember everyone in the industry (?) being surprised recently by the scale of GPT-3's training data and model size, but I wonder whether that is the direction to go.