This month, we're putting the spotlight on Sam Al-Mutawa, one of the original members of our Customer Experience (CX) team. Sam talks about his role, what he's excited for in 2018, and his favorite buildings and spaces. If you want to read some of Sam's work, check out his blog post on Daylighting Studies in Prospect.
This month, we're putting the spotlight on Ailyn Mendoza, our Director of Customer Experience. Ailyn talks about her role, her favorite part of her job, and her plans for 2018 - which include visiting Game of Thrones filming locations!
This month, we're putting the spotlight on Robin Kim, one of the Software Engineers on our Cloud team. Robin has been at IrisVR for over two years and shares his thoughts on what's changed, what it's been like to watch the company grow, and what he's excited for in the coming year.
In honor of National Intern Day 2017, we're putting the spotlight on George Hito, our software engineering intern! George hails from Chapel Hill, North Carolina, and attends Dartmouth College.
Update from 2018: George has joined the team full-time!
Jack Donovan has been with IrisVR since the beginning - and a lot’s happened since his last Employee Spotlight in August 2014. Jack shares his thoughts on what’s changed at IrisVR over the past three years, what it’s been like to watch the company scale, and what he’s excited for in the coming months.
Today's Hack Day took a turn for the dark and spooky with Jack, Amr, and Greg converting one of our in-house architectural models into a VR-ready haunted house. We realize that not all of you have VR headsets (yet), so we're also releasing a non-VR version that works with your old-school computer monitor.
My name is Greg Krathwohl. I graduated from Middlebury College this spring, majoring in Computer Science and Economics. Before coming north to Vermont, I grew up in Ipswich, Massachusetts. I enjoy coding, running, adventuring, and making maps.

Every day at IrisVR, I'm learning more about architectural modeling and 3D graphics, but my first interest in stereoscopic 3D started about 10 years ago, when I first discovered the Magic Eye books. I quickly mastered the technique of diverging my eyes to see the magical 3D image, and began to experiment with how they worked, creating my own little scenes in Microsoft Paint. Since learning how stereo vision works, I've also been taking 3D pictures: a left image, and a right one a few inches away. To see the full effect, put the images next to each other and diverge your eyes the same way you'd view a Magic Eye. I've long anticipated the day when we'd have the technology to revisit these scenes without this headache-inducing technique.

I was first introduced to programming at Middlebury, where I was fascinated by how coding could create anything. I learned how computer vision could be used to identify edges in an image, find shapes, or pick out objects - or, most amazingly, how multiple views of an object could be used to recreate its 3D geometry. I spent last summer assisting research for Professor Scharstein, known in the world of stereo vision for his benchmarks. We worked on capturing scenes (random objects placed on a table) to create high-resolution depth maps.
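The geometry recovery Greg describes comes down to triangulation: a point that appears in both images of a stereo pair shifts by a disparity that is inversely proportional to its depth. Here's a minimal sketch of that relationship (the focal length and baseline values are illustrative assumptions, not numbers from any real camera rig):

```python
# Depth from stereo disparity: a point's shift between the left and right
# images (disparity, in pixels) encodes its distance from the cameras.
# Z = f * B / d, where f is focal length (px) and B is the baseline (m).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth in meters from a pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point shifted 50 px between the two views, shot with a (hypothetical)
# 700 px focal length and a 6 cm baseline:
z = depth_from_disparity(700.0, 0.06, 50.0)
print(round(z, 3))  # 0.84 (meters)
```

Note that nearby objects have large disparities and distant ones have small disparities, which is why depth resolution falls off quickly with range.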
My name is Jack Donovan, and I graduated from Champlain College with a degree in Game Programming. I've been fascinated with virtual reality and augmented reality since my first Virtual Boy (a 1995 two-tone Nintendo VR console), and I've been growing as a programmer ever since. I make non-VR games too: I co-founded an independent game studio incorporated in Vermont called Team Aurora Games, and I wrote a book called OUYA Game Development By Example, released by Packt Publishing.

My first task at IrisVR was what would evolve into the bread and butter of our automation process: reading an exported .OBJ file and generating a 3D mesh based on its geometry data. It was a primitive torus shape, something like a donut, and despite its simplicity, seeing it generated on screen properly was exciting. That prototype is a little dated now that our algorithm can load a model of the entire Empire State Building, but I won't forget that first task as the jumping-off point that got me started coding and learning more about virtual reality and procedural mesh generation.
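For readers curious what "reading an exported .OBJ file" actually involves, the core of it is pulling vertex positions and face indices out of a plain-text format. This is a minimal sketch, not IrisVR's actual loader (which handles far more of the OBJ spec than this):

```python
# Minimal Wavefront .OBJ parsing: "v" lines carry vertex coordinates,
# "f" lines carry 1-based vertex indices (optionally with /texture/normal
# suffixes like "3/1/2"). A mesh generator turns these into triangles.

def parse_obj(text):
    """Return (vertices, faces) from OBJ source text.

    vertices: list of (x, y, z) floats
    faces: list of 0-based vertex-index tuples
    """
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":   # geometric vertex: v x y z
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f": # face: keep only the vertex index, make it 0-based
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A single triangle as OBJ text:
sample = """
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
verts, faces = parse_obj(sample)
print(verts)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(faces)  # [(0, 1, 2)]
```

From there, procedural mesh generation is a matter of handing those vertex and index buffers to the rendering engine, which is essentially what that first torus prototype did.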