QuenchTec has developed a new integration with Medallia’s LivingLens solution, giving our survey users integrated access to comprehensive video analysis.

As part of our ongoing product development efforts, and in response to increased demand from our clients, our Head of Solutions, Bjorn Gronli, together with the development team, has been testing several options for a comprehensive, automated video analysis capability.  QuenchTec has previously assisted its clients in several projects involving video as part of a survey, whereby respondents can upload files, photos and videos; respondents can capture a video on their mobile as part of a “normal” survey.  We have also conducted pilot projects performing transcription and sentiment analysis of the videos using Microsoft Azure Cognitive Services – a service that is an integrated part of our normal infrastructure.

There are many providers of more advanced video analysis solutions, including reporting, in the marketplace.  One of these is Medallia LivingLens.  The solution has an API for integration and offers several options for how the video recording is conducted and how other information is attached to the uploaded video.  These options include the minimum and maximum duration of a video and the possibility to associate information from the survey, such as gender, age and other given answers, which in turn can assist the analysis on the LivingLens platform.

To showcase the adaptability of our iQuests (interactive questions), QuenchTec recently conducted a project with a client using this solution, seamlessly integrated with our Survey platform.  With this new iQuest solution, we have created a simple user interface that all our Survey users can utilise, enabling them to take advantage of all the features offered by LivingLens.

The current iQuest makes it easy for any survey author creating a normal survey on our platform to include LivingLens videos.  No programming skills are needed.

Settings are categorised as follows:

  • Component settings:  How the video capture component should behave, e.g. the size of the video window, the minimum and maximum length of the recording, the language used in the controls (LivingLens supports more than 25 languages), hiding the Next button until a video has been captured, etc.
  • Upload settings:  e.g. a description to appear in LivingLens and the ID to associate with the upload (respondent or alternate ID).  Not least, the possibility to list the responses to any questions that you want to upload together with the video.  This information is then automatically available for filtering and analysing the videos in LivingLens.
  • General settings:  e.g. the country and language to be used for automatic transcriptions (LivingLens supports about 50 languages).
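To give a feel for how the three categories fit together, here is a minimal sketch in Python of what such a settings structure might look like.  All key names and values below are illustrative assumptions for this article – they are not the actual iQuest or LivingLens API parameter names.

```python
# Hypothetical representation of the three settings categories.
# Every key name here is illustrative, not an actual iQuest or
# LivingLens parameter.
iquest_settings = {
    "component": {                       # how the capture widget behaves
        "video_width": 640,
        "video_height": 480,
        "min_length_seconds": 10,
        "max_length_seconds": 120,
        "control_language": "en",        # LivingLens supports 25+ control languages
        "hide_next_until_captured": True,
    },
    "upload": {                          # metadata sent along with the video
        "description": "Customer feedback video",
        "id_type": "respondent",         # or an alternate ID
        "attached_answers": ["gender", "age", "q12_satisfaction"],
    },
    "general": {                         # transcription locale (about 50 supported)
        "transcription_country": "GB",
        "transcription_language": "en",
    },
}

def validate(settings):
    """Basic sanity checks on a settings dict (illustrative only)."""
    comp = settings["component"]
    assert 0 < comp["min_length_seconds"] <= comp["max_length_seconds"]
    assert settings["upload"]["id_type"] in ("respondent", "alternate")
    return True
```

The point of the structure is the last item under upload settings: the listed survey answers travel with the video, which is what makes them available as filters during analysis in LivingLens.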

LivingLens will then automatically transcribe all the videos and perform the sentiment analysis; object and facial recognition are also available as options.  It allows you to create “showreels” to share with your end-clients, offering numerous editing and analytics functions.

The ease of use of the iQuest integration, and in particular the simple way any answers from the survey can be made automatically available for analysis on the LivingLens platform, makes this a compelling combined solution for advanced video analysis.

If you are interested in exploring this solution, please get in touch with your contact person at QuenchTec – or drop us an email at sales@quenchtec.com.