visageSDK is a Software Development Kit - a set of documented software libraries that software developers can use to integrate Visage Technologies' face tracking, analysis and recognition algorithms into their applications.
No. visageSDK is not a finished application, but a Software Development Kit that you use to develop or integrate your own application. This is something your software development team can do, based on the documentation delivered as part of visageSDK.
Visage V1.2 License Key
The version of visageSDK can be found in the upper left corner of the offline documentation, as depicted in the image below. The offline documentation is located in the root folder of each visageSDK package as Documentation.html.
Regarding GDPR and privacy in general: visageSDK runs entirely on the client device and does not store or transmit any personal information (photos, names, or similar), nor any other information, apart from the Visage Technologies License Key File used for licensing purposes.
Integration with the Unity 3D game engine is available in the visageSDK packages for Windows, iOS, Android, Mac OS X and HTML5. Each of these packages includes Unity sample projects to help you get started. Unity 3D integration is implemented via VisageTrackerUnityPlugin, a wrapper that exposes visageSDK functionality for use in C#.
Additionally, we provide VisageTrackerUnityPlugin, a C# wrapper made specifically for integration with the Unity 3D engine. For more information, please see Windows: Unity3D. VisageTrackerUnityPlugin is included in visageSDK for Windows, Mac OS X, iOS and Android.
visageSDK is implemented in C++. It provides C++ and C# APIs, but currently does not include a Java API. visageSDK for Android includes JNI-based Java wrappers as part of the sample projects provided in the package. You can use these wrappers as a starting point for your own Java projects. In conclusion, it is certainly possible to use visageSDK with Java, but it requires some additional effort for interfacing via JNI wrappers.
visageSDK is implemented in C++. It provides C++ and C# APIs. On Windows, the C# API is implemented as a C++/CLI library (VisageCSWrapper.dll) which, in theory, should be usable in other .NET languages, e.g. VB.NET. However, we have not tested this. The C# API documentation may be used as a basis for using the VisageCSWrapper C++/CLI library in VB.NET.
visageSDK is implemented in C++ and provides a C++ API, which cannot easily be used directly in Python without a C-function wrapper. visageSDK provides such a wrapper in the form of VisageTrackerUnityPlugin, which was made specifically for integration with the Unity 3D engine but can also be used by other applications/languages that support importing C functions from a library. At its core, VisageTrackerUnityPlugin is a high-level C-function wrapper around the C++ API. In Python, the ctypes library (Python's foreign function library) can be used to import and call its C functions. As the source code is provided, VisageTrackerUnityPlugin can also be used to implement a custom Python wrapper. Even though this approach has been tested, such usage of VisageTrackerUnityPlugin is not officially supported.
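The ctypes approach described above can be sketched as follows. Note that the library file name and the exported function name used here are assumptions for illustration only; inspect the plugin's actual exports (e.g. with dumpbin /exports or nm) and its source code for the real symbols and signatures.

```python
import ctypes
import os

# Assumed library name -- adjust to the plugin binary shipped with your
# visageSDK package (.dll on Windows, .so on Linux/Android).
LIB_PATH = "VisageTrackerUnityPlugin.dll"

def load_tracker(lib_path=LIB_PATH):
    """Load the plugin and declare argument/return types for an assumed export."""
    lib = ctypes.CDLL(lib_path)  # raises OSError if the library is missing
    # Hypothetical signature: void _initTracker(const char* configPath)
    lib._initTracker.argtypes = [ctypes.c_char_p]
    lib._initTracker.restype = None
    return lib

if os.path.exists(LIB_PATH):
    tracker = load_tracker()
    tracker._initTracker(b"Facial Features Tracker - High.cfg")
else:
    print("plugin not found; adjust LIB_PATH to your visageSDK installation")
```

Declaring argtypes/restype before the first call is important with ctypes, as the default int-based marshalling can silently corrupt pointer arguments on 64-bit systems.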
visageSDK is implemented in C++ and provides a C++ API; therefore, direct calls from React Native are not possible without a wrapper with C-interface functions. An example of such a wrapper is provided in visageSDK in the form of VisageTrackerUnityPlugin, which provides a simpler, high-level API through a C interface. It is intended for integration with Unity 3D (in C# with P/Invoke), but it can also be used by other applications/languages that support importing and calling C functions from a native library, including React Native.
For users willing to experiment, there is an undocumented workaround that allows the use of visageSDK in Flutter. visageSDK is implemented in C++ and provides a C++ API; therefore, direct calls from Flutter are not possible without a wrapper with C-interface functions. An example of such a wrapper is provided in visageSDK in the form of VisageTrackerUnityPlugin, which provides a simpler, high-level API through a C interface. It is intended for integration with Unity 3D (in C# with P/Invoke), but it can also be used by other applications/languages that support importing and calling C functions from a native library, including Flutter.
Postman is a scalable Web API testing tool that integrates quickly into CI/CD pipelines. As visageSDK does not provide an out-of-the-box Web (REST/SOAP) API, it is not possible to call visageSDK directly from Postman. To use Postman, you would first need to implement a Web API on your own server using visageSDK for Linux, Windows or HTML5, depending on your choice of server OS.
We believe it should be possible to use visageSDK in a WebView, but we have not tried it, nor do we currently have any clients who have done so, so we cannot guarantee it. Performance will almost certainly be lower than with a native app.
30 May 2020 visageSDK 8.4 for rPI can be downloaded from the following link: -rPI-linux_v8.4.tar.bz2. Once you unpack it, you will find the documentation in the root folder; it will guide you through the API and the available sample projects.
The package you have received is visageSDK 8.4. The latest release is visageSDK 8.7, but it is not yet available for rPI. If your initial tests prove promising, we can discuss building the latest version on demand for you. visageSDK 8.7 provides better performance and accuracy, but the API is mostly unchanged, so you can run relevant initial tests with 8.4.
09 Mar 2021 As of the visageSDK 8.7 stable release, this is no longer possible. However, the HTML5 libraries can be included in Node.js projects or any JavaScript framework using the script element ( -US/docs/Web/API/HTMLScriptElement#dynamically_importing_scripts), taking into account the order of loading scripts described on the API page.
Yes, visageSDK can be used with any IP camera. However, note that visageSDK actually does not include camera support as part of the API; the API simply takes a bitmap image. Image grabbing from a camera is implemented at the application level.
The mentioned camera parameters (1920 x 1080, 30 fps, mono) should be appropriate for our software and most use cases. With these parameters, visageSDK can provide stable tracking at distances of up to 5 meters.
There is no ready-made application to do this, but it can be achieved by modifying the existing sample projects. Such modification should be simple to do for any software developer, based on the documentation delivered as part of visageSDK. We provide some instructions here.
To get you started, each platform-specific visageSDK package contains sample projects with full source code that can be modified for that purpose. See the Samples page to find the documentation of the specific sample projects.
The online demo is based on visageSDK for HTML5 and has some limitations due to the HTML5 implementation. For better performance, you may want to download visageSDK for Windows, Android or iOS; each of these packages contains a native ShowcaseDemo in which ear tracking can be tried.
If you are already developing using visageSDK and want to enable ear tracking, you can use the ready-made configuration file Facial Features Tracker - High - With Ears.cfg. Ear tracking is enabled using the refine_ears configuration parameter.
There is no ready-made application to do this, but it can be achieved by modifying the ShowcaseDemo sample project in visageSDK for Windows. Such modification should be simple to do for any software developer, based on the documentation delivered as part of visageSDK and the instructions below.
Saving analysis data to a file can be achieved by adding export code around the VisageSDK::VisageFaceAnalyser calls. In visageSDK\Samples\OpenGL\build\msvc140\ShowcaseDemo\ShowcaseDemo.xaml.cs you will find the gVisageFaceAnalyser::estimateEmotion(), ::EstimateAge() and ::EstimateGender() calls in the AnalyzeFace() function. gEmotionFilterList[index], gAgeArray[index] and/or gGenderArray[index] hold the raw results that can be saved to a file.
visageSDK includes active liveness detection. The user is required to perform a simple facial gesture (smile, blink, or raise eyebrows). Face tracking is then used to verify that the gesture is actually performed. You can configure which gesture(s) you want to include. As the app developer, you also need to take care of displaying appropriate messages to the user.
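The app-level verification logic described above can be sketched as follows. This is illustrative application code only, not the visageSDK liveness API: it assumes your app has already converted per-frame tracking output into a set of detected gesture names per frame, and simply confirms the requested gesture was held for a number of consecutive frames.

```python
# Minimum number of consecutive frames a gesture must be detected in
# before we accept it (an assumed threshold, tune for your frame rate).
REQUIRED_CONSECUTIVE_FRAMES = 5

def verify_gesture(frames, gesture, required=REQUIRED_CONSECUTIVE_FRAMES):
    """frames: iterable of sets of gesture names detected in each frame.

    Returns True as soon as `gesture` appears in `required` consecutive
    frames; otherwise returns False when the frames run out."""
    streak = 0
    for detected in frames:
        streak = streak + 1 if gesture in detected else 0
        if streak >= required:
            return True
    return False
```

In a real application the accepted gesture would be chosen at random from the configured set (smile, blink, raise eyebrows) so that a replayed video is unlikely to match, and the UI would prompt the user accordingly.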
Here is an outline of how to roughly calculate screen-space gaze by combining data from visageSDK with external information about the screen size (in meters) and the camera's position relative to the screen; the result can be used to determine whether the user is looking at the screen (calculated coordinates outside the screen range mean they are not):
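A minimal sketch of such a calculation is given below. All of the geometry is an illustrative assumption: the camera sits at the top centre of the screen, the screen lies in the plane z = 0 of camera space and extends screen_h metres downward from the camera. visageSDK supplies the head position and gaze direction; the screen/camera layout must come from your own application, and the function names here are not part of the SDK.

```python
import math

def screen_gaze_point(head_pos, gaze_yaw, gaze_pitch, screen_w, screen_h):
    """Intersect the gaze ray with the screen plane (z = 0 in camera space).

    head_pos: (x, y, z) head position in metres, camera at the origin,
              z = distance from the camera.
    gaze_yaw, gaze_pitch: gaze angles in radians (0, 0 = toward the camera).
    screen_w, screen_h: physical screen size in metres.
    Returns (x, y, on_screen), coordinates in metres relative to the camera.
    """
    x, y, z = head_pos
    # Lateral offset accumulated over depth z when projecting the gaze ray
    # onto the plane z = 0.
    sx = x + math.tan(gaze_yaw) * z
    sy = y + math.tan(gaze_pitch) * z
    # Screen spans [-w/2, w/2] horizontally and [-h, 0] below the camera.
    on_screen = abs(sx) <= screen_w / 2 and -screen_h <= sy <= 0
    return sx, sy, on_screen
```

For example, a user 0.6 m in front of the camera looking straight at it intersects the plane at (0, 0), which lies on the screen edge; the same user looking 45 degrees to the side lands well outside a typical laptop screen.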
visageSDK can be used to locate and track faces in group/crowd images and also to perform face recognition (identity) and face analysis (age, gender, emotion estimation). Such use requires particular care related to performance issues since there may be many faces to process. Some initial guidelines:
visageSDK is capable of detecting/tracking faces whose size in the image is at least 5% of the larger image dimension, i.e. the image width for landscape images and the image height for portrait images.
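Applying the 5% rule stated above, the minimum detectable face size in pixels works out as follows (the helper function name is ours, for illustration):

```python
def min_face_size_px(width, height, fraction=0.05):
    """Smallest face size (in pixels) that can be detected/tracked:
    the given fraction (5%) of the larger image dimension."""
    return max(width, height) * fraction

# A 1920x1080 landscape frame needs faces spanning at least 96 px
# (5% of 1920); a 1080x1920 portrait frame likewise needs 96 px.
```

This is one reason higher-resolution cameras extend the usable tracking distance: the same physical face covers more pixels.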
The tracker in visageSDK has a configuration parameter called recovery_timeout. You can adjust this value to your liking, and the tracker will assume (without Face Recognition it cannot know) that any face that reappears in the frame within that recovery timeout is the same person it was tracking before. Adjusting this value can help accommodate temporary disruptions in tracking. However, there is a risk that, if a person leaves and the time window is long enough, a new person can walk into the frame and be tracked under the assumption of being the same person.
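Tracker parameters such as this one are set in the tracker's .cfg configuration file. The excerpt below is hypothetical: the parameter name comes from the text above, but the layout, comment syntax and value (which we assume to be in milliseconds) are illustrative only; consult the tracker configuration documentation shipped with visageSDK for the actual format and default.

```
# hypothetical excerpt -- value and comment syntax are illustrative only
recovery_timeout 800
```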