Virtual reality, the use of computational technology to create a simulated environment, has spiked in use in recent years [1]. In order to render the virtual world from the viewpoint of the user, the user's position must be calculated and tracked. All VR systems measure head orientation, most measure head position, and many measure hand orientation and position [2]. While less common, some can also track feet, chest, and even elbows and knees to increase immersion [3]. This data is a problem for users concerned about privacy, because the tracking data VR provides can be identifying.

In contrast to previous work, which has focused on designing VR tasks to identify or authenticate users [4, 5, 6], we begin with a task that was not designed for identification. In fact, the tracking data we use is from a study [7] designed with the intention of examining the associations between motion, self-report emotion data, and video content. Moreover, the current work is unique in that it uses a very large (over 500 participants) and diverse sample drawn primarily from outside a university. Both features of the sample are relevant theoretically. First, identification in small samples likely overestimates the diagnosticity of certain features, given the lack of overlap in body size and other sources of variance. Second, different people likely have different types of body movements; for example, the study has over 60 participants over the age of 55, and the movement of this group is likely different from that of a typical college student.

If tracking data is by nature identifying, there are important implications for privacy as VR becomes more popular. The most pressing class of issues falls under the process of de-identifying data. It is standard practice when releasing research datasets or sharing VR data to remove any information that can identify participants or users, and under the privacy policies of both Oculus and HTC, makers of two of the most popular VR headsets in 2020, the companies are permitted to share any de-identified data.

I know this is old, but I've been trying to solve this as well. R3f has great resources on how to use the real-time scene directly as an envmap; if you're trying an effect or something, that would be the way to go. However, if you need to render a real location or event using video data, you're going to have a bad time… A key thing is that for image-based lighting, the HDRI needs to have a fuller spectrum of color/light information. For sites like PolyHaven, which have strict standards, the requirement is full 32-bit color, which is what you'd use in a video editor or modeling software. So while a PNG in 8-bit color looks great, it doesn't capture the full picture of how the light was.
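For reference, this is the static-HDRI baseline the reply is contrasting video against: a minimal sketch in plain three.js (not R3f) that loads a 32-bit equirectangular `.hdr` and uses it for both the background and image-based lighting. The file name is a placeholder, not something from the original post.

```ts
// Minimal sketch: static image-based lighting from a 32-bit .hdr file
// (the kind of equirectangular map PolyHaven distributes).
import * as THREE from 'three';
import { RGBELoader } from 'three/examples/jsm/loaders/RGBELoader.js';

const scene = new THREE.Scene();

// 'studio_2k.hdr' is a placeholder path.
new RGBELoader().load('studio_2k.hdr', (hdrTexture) => {
  hdrTexture.mapping = THREE.EquirectangularReflectionMapping;
  scene.background = hdrTexture;   // what the camera sees
  scene.environment = hdrTexture;  // what PBR materials are lit by
});
```

A 1K or 2K map like this is usually plenty for real-time lighting; the problem described below is what happens when the source has to be video.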
360 videos are already huge, and they also demand crazy high resolution, which only multi-cam 360 rigs can attempt… and even Adobe was only able to get that in 2016. A 2-minute uncompressed HDR 360 video I have is 17GB.

The next problem is transmitting the file. Any decent codec like H.264 or even AV1 seems to only work in 8-bit, and that's kinda where I'm stuck now in finding output options that are viable. The game-industry standard codec Bink actually recently added support for full HDR (16-bit), so if you have $9,000 you could get a license and potentially make that work…

If we go into it knowing we don't need Pixar-level frames, we can try things that are more approachable and potentially viable. When running in real time, 1K or 2K HDRIs are often more than enough, and you can actually generate the light in its own weird texture. So off the bat, I'm thinking we'll probably be pretty alright using standard videos, and since it'll be a video background it would probably cover the shortcomings (untested assumption). There ARE tools that take 8-bit images and do a decent job of upscaling to 16 or even 32 bit, but I don't know enough about them. One idea I had was to output frames into a texture atlas, which super recently got KTX compression support in three… so if your animation is super short, you could potentially do a flip book.

The totally crazy idea I'm cooking up now is to render a standard-format (8-bit) video in regular web-compatible formats and put that in the background, then do a 1K irradiance (or whatever it is) light data set of frames for the same time length but way smaller, like 25% of final resolution and probably starting at 12fps, then blend and super-sample the extra light data and merge it with my standard video on the GPU.
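As a rough illustration of that last idea, here is a sketch in plain three.js, assuming an 8-bit equirectangular MP4 for the visible background and a much smaller companion video used only for lighting. The file names, the 250 ms re-prefilter interval, and the exact background/lighting split are assumptions for illustration, not details from the post.

```ts
// Sketch of the "8-bit video background + small separate light pass" idea.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();

// Full-resolution, web-friendly (8-bit) equirectangular video as the visible background.
// 'pano_4k_8bit.mp4' is a placeholder file name.
const bgVideo = document.createElement('video');
bgVideo.src = 'pano_4k_8bit.mp4';
bgVideo.loop = true;
bgVideo.muted = true;
bgVideo.play();

const bgTexture = new THREE.VideoTexture(bgVideo);
bgTexture.mapping = THREE.EquirectangularReflectionMapping;
bgTexture.colorSpace = THREE.SRGBColorSpace; // r152+ color-space API
scene.background = bgTexture;

// Much smaller companion video (e.g. 1K, low fps) used only for lighting.
// 'pano_light_1k.mp4' is also a placeholder.
const lightVideo = document.createElement('video');
lightVideo.src = 'pano_light_1k.mp4';
lightVideo.loop = true;
lightVideo.muted = true;
lightVideo.play();

const lightTexture = new THREE.VideoTexture(lightVideo);
lightTexture.mapping = THREE.EquirectangularReflectionMapping;

// Prefilter the light video into an environment map a few times per second.
const pmrem = new THREE.PMREMGenerator(renderer);
let envTarget: THREE.WebGLRenderTarget | null = null;

setInterval(() => {
  const next = pmrem.fromEquirectangular(lightTexture);
  scene.environment = next.texture;
  envTarget?.dispose(); // free the previous prefiltered target
  envTarget = next;
}, 250);
```

Re-running PMREMGenerator every frame is usually too expensive, which is why this sketch refreshes the environment only a few times per second; for many scenes a single static frame used for lighting, with the video kept purely as the visible background, may be good enough.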