
Meta smart glasses are going from niche to norm, so what happens when people start wearing them in the office, asks Paul Armstrong
Meta’s recent leaked facial recognition memo revealed more than poor judgement. Internal discussion described launching facial recognition on Ray-Ban smart glasses “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns”. Many shrugged and filed it under “Meta being Meta”, but the reality is the hardware is scaling, and most businesses haven’t begun to think through what that means inside their own walls.
Some 7m Ray-Ban Meta smart glasses were sold in 2025, and the partly Meta-owned EssilorLuxottica plans to increase production to between 20m and 30m units annually by year end. At that level, adoption stops being niche and starts becoming ambient. Cameras mounted at eye level begin to look ordinary, and microphones embedded in frames fade into the background. Offices, trading floors and factory environments absorb constant recording whether governance frameworks exist or not. A handful of employees in most medium and large organisations are almost certainly already wearing these devices at work, yet many firms still lack clear policies covering what can be recorded, who can be identified and where the data ultimately resides. Shadow IT began with unsanctioned software slipping through the browser; the next version will be worn on faces.
The issue isn’t just identity and recording, nor is the technology speculative. Harvard students demonstrated in 2024 how off-the-shelf Meta glasses could be combined with facial-recognition and people-search tools to create I-XRAY, a system that identified strangers in under two minutes, pulling names, addresses and other personal details from public databases. Chinese police are already using augmented-reality glasses for identification and traffic enforcement. UK police continue to expand live facial recognition in public spaces. Distribution is happening publicly and privately.
The risk for workplaces
Scale changes the economics of risk before policy catches up. Meta’s Q4 2025 revenue reached $59.89bn and full-year revenue hit $200.97bn: ‘eff-EU’ money, in other words. Paying fines has become business as usual for big tech, and Meta, like most others, is a long-time offender. Facebook agreed to a $650m biometric privacy settlement in Illinois. Nine-figure settlements don’t function as deterrents; they read as a cost of doing business. For companies operating at that scale, fines can be absorbed and modelled. Europe’s AI Act allows fines of up to €35m or seven per cent of global turnover, whichever is higher, for the most serious violations; at Meta’s revenue, seven per cent works out at roughly $14bn. Serious language backed by serious sums, but enforcement moves more slowly than product cycles. By the time regulators intervene, tens of millions of devices may already be embedded in daily routines.
Incentives make the direction clear without requiring a conspiracy theory. Hardware partners need units on heads to justify manufacturing capacity. Platforms need streams of data flowing into their models. Investors expect growth curves, not restraint. Regulation typically follows distribution rather than stopping it. Exposure then shifts from the platform to the enterprise. When employees become nodes in a sensing network that no board formally approved, legal and reputational risk sits with the firm using the device rather than the company that built it. Insurance policies written for laptops and phones may not extend to continuous recording on privately owned glasses. Employment contracts rarely contemplate biometric capture between colleagues. Consumer terms frequently grant broad rights over uploaded content. A confidential client meeting recorded through smart glasses and synced to a third-party cloud service presents an entirely different order of risk from an unauthorised SaaS subscription.
Productivity will drive adoption
Productivity will be the force that drives adoption, and the case studies are accumulating. Manufacturing firms deploying smart glasses for hands-free documentation report efficiency gains of 20-30 per cent in complex assembly alongside measurable reductions in error rates, while energy operators describe shorter training cycles and healthcare teams cite improvements in documentation accuracy and workflow consistency. In complex, safety-critical environments those gains are not cosmetic; they translate into fewer mistakes, faster onboarding and tighter operational control. Boards will feel pressure to test the technology precisely because the commercial upside is tangible rather than speculative.
The risk lies in assuming productivity and surveillance travel separately. A device guiding a technician through a repair also records the whiteboard behind them. A camera supporting remote maintenance also captures negotiation dynamics and sensitive financial information. Emotion-analysis software already exists commercially; pair inference engines with always-on cameras and monitoring shifts from visible oversight to continuous inference. Every employee wearing smart glasses becomes a potential data source, and every meeting becomes something that could be stored, analysed or reused. Blocking a browser tab is straightforward; regulating what sits on a colleague’s face is not, and the attempt will leave its mark on workplace culture.
Boards should treat smart glasses as infrastructure rather than as a gadget trial delegated to IT. The real decision isn’t whether the workflow gains look attractive in a demo, but whether identification features that add marginal benefit justify the exposure they create. Recording boundaries in deal rooms, client briefings and HR settings need to be set before adoption hardens; insurance cover needs to be tested against biometric and continuous-capture risk; and consumer device terms need to be understood before those devices enter secure environments. Procurement discipline no longer stops at software; it now shapes behaviour.
Smart glasses won’t wait for governance frameworks to mature, and the companies selling them have balance sheets capable of absorbing pretty much whatever fine follows. Responsibility will sit with the businesses that allowed the devices inside. Platforms can price penalties into growth, but most organisations can’t afford to learn, too late, that their own people became the data infrastructure for someone else’s model.