KEYNOTES

What do image generators know?

David A. Forsyth, UIUC

Abstract

Intrinsic images are maps of surface properties, like depth, normal and albedo. 

I will show the results of simple experiments suggesting that very good modern depth, normal and albedo predictors are strongly sensitive to lighting – if you relight a scene in a reasonable way, the reported depth will change. This is intolerable. To fix this problem, we need to be able to produce many different lightings of the same scene. I will describe a method to do so. First, one learns to estimate albedo from images without any labelled training data (an estimator that turns out to perform well under traditional evaluations). Then, one forces an image generator to produce many different images that have the same albedo – with care, these are relightings of the same scene. I will show some interim results suggesting that learned relightings might genuinely improve estimates of depth, normal and albedo.
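
To make the experiment concrete: the test amounts to running one predictor on two images of the same geometry under different lighting and measuring the disagreement. The sketch below is a minimal illustration, not the speaker's code; predict_depth is a hypothetical stand-in for whatever monocular predictor is under test.

import numpy as np

# Minimal sketch (not the speaker's code) of the lighting-sensitivity test.
# `predict_depth` is a hypothetical stand-in for the monocular depth
# predictor under test; `scene` and `relit` are HxWx3 float arrays showing
# the same scene under two different lightings.
def lighting_sensitivity(predict_depth, scene, relit):
    d0 = predict_depth(scene)
    d1 = predict_depth(relit)
    # Monocular depth is often predicted only up to scale, so compare
    # median-normalized maps. Ideally this is ~0: depth is a property
    # of geometry, not of lighting.
    d0 = d0 / np.median(d0)
    d1 = d1 / np.median(d1)
    return float(np.abs(d0 - d1).mean())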

But if an image generator can relight a scene, it likely has a representation of depth, normal, albedo and other useful scene properties somewhere.  I will show strong evidence that depth, normal and albedo can be extracted from two kinds of image generator, with minimal inconvenience or training data.   Furthermore, all these intrinsics are much less sensitive to lighting changes.  This suggests that the right way to obtain intrinsic images might be to recover them from image generators.  It also suggests image generators might “know” more about scene appearance than we realize.
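
Read as an experiment, "extracted with minimal training data" suggests a probing setup: freeze the generator, tap its intermediate features, and train only a small readout head per intrinsic. The sketch below is schematic and assumes hypothetical feature tensors hooked out of a frozen generator; it is not the speaker's method, and which layer (and, for diffusion models, which timestep) to tap is the real design choice.

import torch
import torch.nn as nn

# Schematic probe: the generator is frozen, so gradients reach only the
# tiny readout head that maps its features to an intrinsic (here, depth).
class DepthReadout(nn.Module):
    def __init__(self, feat_channels: int):
        super().__init__()
        self.head = nn.Conv2d(feat_channels, 1, kernel_size=1)  # tiny probe

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)  # (B, 1, h, w) coarse depth map

def train_step(readout, optimizer, feats, depth_gt):
    optimizer.zero_grad()
    pred = readout(feats)
    # Generator features are usually coarser than the image, so upsample
    # the prediction to the ground-truth resolution before the loss.
    pred = nn.functional.interpolate(pred, size=depth_gt.shape[-2:],
                                     mode="bilinear", align_corners=False)
    loss = nn.functional.l1_loss(pred, depth_gt)
    loss.backward()   # gradients reach only the readout head
    optimizer.step()
    return loss.item()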

About the speaker

I have held the Fulton-Watson-Copp chair in Computer Science at the University of Illinois at Urbana-Champaign since 2014, having moved there from U.C. Berkeley, where I was a full professor. I have published over 170 papers on computer vision, computer graphics and machine learning. I have served as program co-chair for IEEE Computer Vision and Pattern Recognition in 2000, 2011, 2018 and 2021, as general co-chair for CVPR 2006 and 2015 and ICCV 2019, and as program co-chair for the European Conference on Computer Vision 2008, and I am a regular member of the program committee of all major international conferences on computer vision. I have served six years on the SIGGRAPH program committee and am a regular reviewer for that conference. I have received best paper awards at the International Conference on Computer Vision and at the European Conference on Computer Vision. I received an IEEE technical achievement award for 2005 for my research. I became an IEEE Fellow in 2009, and an ACM Fellow in 2014. My textbook, “Computer Vision: A Modern Approach” (joint with J. Ponce and published by Prentice Hall), is now widely adopted as a course text (adoptions include MIT, U. Wisconsin-Madison, UIUC, Georgia Tech and U.C. Berkeley). A further textbook, “Probability and Statistics for Computer Science”, is in print; yet another, “Applied Machine Learning”, has just appeared. I have served two terms as Editor in Chief of IEEE TPAMI, and I have served on a number of scientific advisory boards.

Ethical, safe and inclusive XR technologies: the way forward

Ekaterina Prasolova-Førland

Abstract

Emerging technologies such as XR, the Metaverse and AI are going to disrupt the educational landscape in the years to come. The ethical challenges and questions associated with integrating these technologies into everyday and educational practices are many and not very well understood. For example: Is it ethically acceptable to use simulated torture in virtual reality for teaching history and literature? Is it OK to resurrect people in the Metaverse for educational purposes? The Horizon CSA XR4HUMAN project seeks to identify these challenges and to create a European roadmap for ethical, safe and inclusive XR development and use. The talk will outline existing ethical, privacy and inclusivity challenges associated with the development and use of XR technologies, based on preliminary results from the XR4HUMAN project, and will illustrate several ethical dilemmas by raising highly provocative questions with corresponding demos (trigger warning).

About the speaker

Dr. Ekaterina Prasolova-Førland is a Full Professor and Head of the Innovative Immersive Technologies for Learning (IMTEL) research group and lab at the Norwegian University of Science and Technology. She has been working with educational virtual worlds and immersive technologies since 2002, with over 100 publications in the field. She has been involved in developing educational XR simulations for a wide range of stakeholders, including industry, hospitals, the Norwegian Armed Forces, and the Norwegian Labour and Welfare Administration. Ekaterina is Norway’s ambassador for Women in Immersive Tech.


Enhancing Realistic Rendering for Mixed and Virtual Reality Games

Esteban Walter Gonzalez Clua

Abstract

The video game industry continuously advances real-time rendering techniques, with an increasing focus on features like ray tracing and global illumination. At the same time, VR/MR/AR games push for high-quality rendering under tight constraints: high-resolution displays (many pixels to shade), less powerful processors, and higher refresh-rate requirements. This talk will present key optimization strategies, including hybrid denoising, foveated culling methods, optimization for foveated displays, and the use of neural rendering approaches.
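
As a toy illustration of the foveated theme (not drawn from the talk itself), the sketch below assigns coarser shading rates with distance from the gaze point; all radii and rates are invented for illustration, and a real engine would drive the GPU's variable-rate-shading hardware, tuned to the headset's field of view and eye tracker.

import numpy as np

# Toy foveation sketch: full shading effort near the gaze point,
# progressively less in the periphery.
def shading_rate_map(height, width, gaze_xy,
                     fovea_radius=0.15, mid_radius=0.35):
    ys, xs = np.mgrid[0:height, 0:width]
    # Normalized distance of every pixel from the gaze point.
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / max(height, width)
    rate = np.full((height, width), 4)   # periphery: one sample per 4x4
    rate[dist < mid_radius] = 2          # mid region: one sample per 2x2
    rate[dist < fovea_radius] = 1        # fovea: full shading rate
    return rate

rates = shading_rate_map(1080, 1200, gaze_xy=(600, 540))
print("fraction of pixels shaded at full rate:", (rates == 1).mean())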


About the speaker

Esteban is a Full Professor at Universidade Federal Fluminense, coordinator of the UFF Medialab, a CNPq researcher (level 1D), and a Scientist of the State of Rio de Janeiro since 2019. He holds an undergraduate degree in Computer Science from the Universidade de São Paulo and master’s and doctoral degrees from PUC-Rio. His main research and development areas are real-time rendering, digital games, virtual reality, and GPUs. He is one of the founders of SBGames (the Brazilian Symposium on Games and Digital Entertainment) and was president of the Game Committee of the Brazilian Computer Society from 2010 through 2014. He is the general chair of IFIP TC14 (Entertainment Computing), and is also one of the founders of ABRAGAMES. In 2015 he was named an NVIDIA CUDA Fellow. Esteban is a member of the program committees of most digital entertainment conferences, and has published 66 journal papers and 224 conference papers to date. In 2024 he is the program chair of ACM High Performance Computing and general chair of the IFIP International Conference on Entertainment Computing. In 2023 Esteban received the SBGames award for his career achievements.

From Augmented Reality to Augmented Intelligence: The Future of Spatial Computing and Behavior Analytics

Veronica Teichrieb

Abstract

In her keynote at SVR 2024, Veronica Teichrieb will explore the transformative journey from Augmented Reality (AR) to Augmented Intelligence (AI), emphasizing the pivotal role of spatial computing and behavior analytics in this evolution. The talk will examine how AR, traditionally focused on enhancing user experiences by overlaying digital content onto the real world, is progressively merging with AI to create more intuitive, intelligent systems capable of understanding and predicting human behavior. Through case studies and recent advancements from Voxar Labs, Veronica will demonstrate the potential of these technologies to revolutionize fields from healthcare to education. She will also discuss the challenges and opportunities that lie ahead as virtual and augmented realities become seamlessly integrated with cognitive computing, driving the next wave of innovation in immersive environments.

About the speaker

Veronica Teichrieb is a distinguished professor at the Federal University of Pernambuco (UFPE), where she leads research in Computer Science with a focus on Virtual and Augmented Reality, 3D Interaction, and Spatial Computing. She holds a Ph.D. in Computer Science from UFPE, with a doctoral research period at Aero-Sensing Radarsysteme GmbH, Germany. Veronica has contributed significantly to advances in graphics processing, interaction technologies, and behavior analytics in immersive environments. Her work has been recognized internationally, with numerous awards in competitions related to Virtual and Augmented Reality. She currently directs Voxar Labs, a leading research group in these domains, and is a key figure in exploring the intersection between spatial computing and behavior analytics, aiming to shape the future of augmented intelligence.
