We all know the idea of augmented reality from games, science-fiction films, and AR apps for smartphones. Essentially, it is about extending the original content with additional visual elements (known as “augments”). The position of an augment and its life cycle may be anchored to its target content object or to the view.
- Content type: video stream, street view, geo data
- Augments: map, navigation widgets
- Anchoring: geo points
The most challenging step is parsing the raw source data into structured content and identifying its content objects (like the geo points in the example above). Then we need to embed augments into the view in the right places.
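The parsing step can be sketched as an adapter that turns raw source data into typed, anchorable content objects. The names below (`ContentObject`, `parseGeoPoints`) are illustrative assumptions, not the protocol's actual API:

```typescript
// A content object is anything an augment can be anchored to.
interface ContentObject {
  id: string;                       // stable identifier used for anchoring
  type: string;                     // e.g. "geoPoint", "post", "profile"
  data: Record<string, unknown>;    // parsed payload
}

// A hypothetical adapter for the geo example above: it turns raw
// coordinate records into content objects that augments can target.
function parseGeoPoints(raw: { lat: number; lon: number }[]): ContentObject[] {
  return raw.map((p, i) => ({
    id: `geo-${i}`,
    type: "geoPoint",
    data: { lat: p.lat, lon: p.lon },
  }));
}

const objects = parseGeoPoints([
  { lat: 52.52, lon: 13.405 },
  { lat: 48.8566, lon: 2.3522 },
]);
```

The important property is the stable `id`: it lets an augment stay attached to the same content object even if the object later moves within the view.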
In the example above, the web page is the content, its elements are content objects, and the embedded buttons and other widgets are augments. Let's look at a social feed page and possible augments for it:
- Exchange rates widget;
- Custom filters controls;
- Personal ads;
- Crypto accounts linked to social profile (connected accounts data);
- Additional inserts;
- Additional buttons;
- Profile badges;
- New functional starters;
- Background themes and colors;
- Hidden and paywall content;
- Community stickers;
- NFT profile pictures;
- Visual frames and selections;
- Community posts from other social media;
- and many others.
For a web page, parsing the content means parsing the DOM, and embedding means inserting a widget's HTML into it. All augmentation widgets are organized into an AugmentationDOM, which lies over the original content's DOM.
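The overlay idea can be sketched as a structure that keeps each widget keyed by the content object it augments, separate from the page's own DOM. This is a minimal sketch with illustrative names; in a browser, the `attach` step is where something like `element.insertAdjacentHTML(...)` would place the markup:

```typescript
type WidgetHtml = string;

// A minimal stand-in for an AugmentationDOM: widgets live in their own
// tree, keyed by the id of the content object they overlay, rather than
// being scattered through the original page's DOM.
class AugmentationOverlay {
  private widgets = new Map<string, WidgetHtml[]>();

  // Attach a widget to a content object.
  attach(contentObjectId: string, html: WidgetHtml): void {
    const list = this.widgets.get(contentObjectId) ?? [];
    list.push(html);
    this.widgets.set(contentObjectId, list);
  }

  // All widgets currently overlaying a given content object.
  widgetsFor(contentObjectId: string): WidgetHtml[] {
    return this.widgets.get(contentObjectId) ?? [];
  }
}

const overlay = new AugmentationOverlay();
overlay.attach("post-42", "<button>Tip author</button>");
overlay.attach("post-42", "<span class=\"badge\">verified</span>");
```

Keeping widgets in a separate overlay rather than mutating the page directly makes it easy to remove or re-render all augments without disturbing the original content.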
A web page can still be quite dynamic: content objects may appear, disappear, or move around the page at any time (e.g. through dynamic loading on scroll). Augmentations should follow their linked content objects.
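Keeping augments in sync with a dynamic page boils down to diffing the set of content objects before and after each change. In a browser, the two id sets below would come from re-scanning the DOM inside a `MutationObserver` callback; the function itself is an illustrative sketch, not the protocol's API:

```typescript
// Given the content-object ids visible before and after a DOM change,
// compute which augments must be mounted and which must be torn down.
function diffContentObjects(
  before: Set<string>,
  after: Set<string>,
): { mount: string[]; unmount: string[] } {
  return {
    mount: Array.from(after).filter((id) => !before.has(id)),
    unmount: Array.from(before).filter((id) => !after.has(id)),
  };
}

// E.g. after a scroll loads new posts and evicts old ones:
const delta = diffContentObjects(
  new Set(["post-1", "post-2"]),
  new Set(["post-2", "post-3"]),
);
// delta.mount   -> augments to create for "post-3"
// delta.unmount -> augments to remove for "post-1"
```

Content objects that merely move (present in both sets) need no work beyond repositioning, which is why anchoring by stable id rather than by screen position pays off.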
Although the implementation of the Dapplets Protocol is heavily optimized for the web, the protocol itself is web-agnostic and was designed with a variety of content types in mind.