We are excited to announce the latest improvement to our federated manual search: an AI computer-vision integration that understands the diagrams in manuals and includes them in the materials the search scans and returns to you, the user.
Including Diagrams in Federated Search
As a crucial component of a manual’s content, diagrams are invaluable to include in the search process and to present as part of the answer to the user. We implement this with a GPT-based image-to-text model. Here’s how it works:
Processing a new manual: each diagram is run through the image-to-text model, and the resulting description is indexed alongside the manual’s text.
Processing a new query: the search runs over both the original text and the diagram descriptions, so relevant diagrams surface in the results.
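The two steps above can be sketched as follows. This is an illustrative outline only, not our actual codebase: every name here (`ManualIndex`, `describe_diagram`, the keyword matching) is hypothetical, and the stubbed model stands in for a real GPT-based image-to-text call.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical stand-in for a GPT-based image-to-text model; in a real
# pipeline this would call whichever vision model is configured.
def describe_diagram(image: bytes) -> str:
    return "exploded view of the pump assembly showing impeller and seal"

@dataclass
class ManualIndex:
    passages: Dict[str, str] = field(default_factory=dict)

    def add_manual(self, manual_id: str, text_sections: List[str],
                   diagrams: List[bytes],
                   to_text: Callable[[bytes], str] = describe_diagram) -> None:
        """Processing a new manual: convert each diagram to a text
        description and index it alongside the ordinary text sections."""
        for i, section in enumerate(text_sections):
            self.passages[f"{manual_id}/text/{i}"] = section
        for i, image in enumerate(diagrams):
            self.passages[f"{manual_id}/diagram/{i}"] = to_text(image)

    def search(self, query: str) -> List[str]:
        """Processing a new query: search over all indexed passages,
        diagram descriptions included. Naive keyword matching here;
        a production system would use something stronger."""
        terms = query.lower().split()
        return [key for key, text in self.passages.items()
                if any(t in text.lower() for t in terms)]

index = ManualIndex()
index.add_manual("pump-manual",
                 ["Routine maintenance schedule for the pump."],
                 [b"<png bytes>"])
print(index.search("impeller seal"))  # the diagram passage matches
```

Because diagrams enter the index as text, a single query reaches both kinds of content with no special casing at query time.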
We value versatility, and like most of our implementations, this approach is model-agnostic: whichever image-to-text model leads the field can be plugged into our codebase out of the box. Likewise, the image format used in the manual does not matter. Partners also retain full control over how detailed the model’s conversions should be and whether it should pay particular attention to anything during new manual processing.
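One common way to achieve that kind of model-agnostic plug-in is a small structural interface the pipeline depends on, with detail level and focus hints passed through as parameters. The sketch below assumes such a design; the interface name, parameters, and models are all illustrative, not our partner-facing API.

```python
from typing import List, Optional, Protocol

class ImageToText(Protocol):
    """Any image-to-text model can be swapped in by matching this shape."""
    def describe(self, image: bytes, detail: str,
                 focus: Optional[List[str]]) -> str: ...

class VerboseModel:
    # Produces long narrative descriptions, honoring focus hints.
    def describe(self, image, detail, focus):
        note = f" (focus: {', '.join(focus)})" if focus else ""
        return f"[{detail}] full narrative description of the diagram{note}"

class TerseModel:
    # Produces short captions only.
    def describe(self, image, detail, focus):
        return f"[{detail}] short caption"

def convert(model: ImageToText, image: bytes,
            detail: str = "high",
            focus: Optional[List[str]] = None) -> str:
    # The pipeline depends only on the interface, never on a concrete model,
    # so partners control detail and focus without touching pipeline code.
    return model.describe(image, detail, focus)

print(convert(VerboseModel(), b"...", focus=["torque values"]))
print(convert(TerseModel(), b"..."))
```

Swapping models is then a one-argument change at the call site, which is what lets the leading model of the day be dropped in without code changes elsewhere.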
We look forward to continuing to develop our framework’s ability to master the complexity of manuals and to serve as an effective copilot for repair jobs.