Recently, a number of techniques have appeared that create new motion by extracting motion from video of humans (e.g. vid2vid, vid2game, pose2pose).
Vid2Player, a research project from Stanford University, uses real tennis rally footage to synthesize the positions and motions of tennis players, who move to strike the ball according to where it lands.
They used CycleGAN for image-to-image translation, and pre-processed the data to capture the structure of a tennis rally, such as 'moving - hitting the ball - returning', as well as shadows and cutouts.
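The core idea behind CycleGAN's image-to-image translation is a cycle-consistency constraint: translating from domain A to B and back should recover the original input. The toy sketch below illustrates only that constraint with simple linear maps standing in for the two translators; it is not the Vid2Player authors' model, and the names G, F, and cycle_loss are illustrative.

```python
import numpy as np

# Toy illustration of CycleGAN's cycle-consistency idea (hypothetical
# names; not the Vid2Player implementation). Two "translators",
# G: A -> B and F: B -> A, should satisfy F(G(x)) ~= x.
# Here G is a random invertible linear map and F is its exact inverse,
# so the round trip reconstructs the input almost perfectly.

rng = np.random.default_rng(0)

G = rng.normal(size=(4, 4))   # stand-in translator from domain A to B
F = np.linalg.inv(G)          # stand-in translator back from B to A

x = rng.normal(size=(8, 4))   # a batch of 8 samples from domain A

x_reconstructed = (x @ G.T) @ F.T              # A -> B -> A round trip
cycle_loss = np.mean(np.abs(x_reconstructed - x))  # L1 cycle-consistency loss

print(f"cycle-consistency loss: {cycle_loss:.2e}")
```

In a real CycleGAN, G and F are convolutional generators trained jointly with adversarial losses, and the cycle loss above is one term in the training objective rather than exactly zero.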
The source code has not been released yet, but you can see interesting demo results, such as a Federer vs. Federer rally, on the project site.