The Harvard students who added AI facial recognition to Meta glasses share some of their other big ideas
- The two Harvard students who put facial recognition AI in Meta's Ray-Ban glasses have big ideas.
- The duo, AnhPhu Nguyen and Caine Ardayfio, are known for their innovative tech projects.
- They plan to apply AI advancements to construction, enhancing robotics with reasoning.
Two Harvard students spooked the world with their demonstration of facial recognition using Meta Ray-Bans. Now, the duo says they have big ideas for AI in the construction industry.
AnhPhu Nguyen and Caine Ardayfio made headlines this month for their I-Xray project. They posted a demo that showed their software could scan video taken by the glasses and instantly identify people by searching the internet for their personal information.
But Nguyen and Ardayfio told Business Insider it wasn't the first high-tech gadget they created. They founded the AR/VR club at Harvard and quickly got to work on real-world projects during their sophomore year. They are now juniors.
"We lived in the science and engineering building for like an entire summer and just built random projects," Ardafiyo said.
"The first one was the flamethrower, but then we built an electric skateboard that you can control with your fingers," Ardafiyo said. "We built a tentacle, like a robotic tentacle, that was like four feet long and could move through the air."
It was through the AR/VR club that they gained access to the Meta glasses. They also built AI into augmented reality glasses that can fact-check a user's statement during an argument in real time.
Nguyen said he has "always been prioritizing projects a bit more than, like, my GPA because I don't really want to get into grad school right off the bat."
Ardayfio and Nguyen have big ideas for AI products in the manufacturing, construction, and industrial tech spaces, they told BI.
Advances in AI large language models like OpenAI's ChatGPT and Anthropic's Claude over the past year have allowed for more rapid innovation "in a way that you haven't been able to before," Ardayfio said.
For example, if an autonomous construction robot trying to dig a hole were blocked by a person standing in the way, an engineer would previously have needed to hardcode every one of its movements to avoid the person, Ardayfio said.
But a construction robot with LLM reasoning can say to itself, "I'm going to wait for this person for a couple seconds until they move, and then if they don't, then I'll do this," and "you can have a very strong likelihood that whatever it does is pretty reasonable," Ardayfio said.
"That's actually, like, very incredible, and a capability that has only existed in the past six months, essentially," he added.