How does one determine the correctness of an ONVIF implementation? Can a test tool alone be the judge of such correctness?
The correctness of an ONVIF implementation has multiple aspects and cannot be judged by a test tool alone, because (1) the test tool itself can contain bugs, and (2) the test tool does not cover all testing scenarios. This is the root cause of the industry reality that many ONVIF implementations do not work well together, even though they all pass the test tool.
According to the ONVIF testing specification document "ONVIF_Test_Specification_v13_06.pdf", section "4. Scope" says: "This ONVIF Test Specification does not cover the complete set of functions as defined in [ONVIF Network Interface Specs]; instead it covers a subset of it:". This plainly states that the official testing specification does not cover all scenarios and is therefore incomplete. In the same document, section "5. Requirements for respective test cases" says: "This document defines whole requirements for claiming conformance to the ONVIF standard which includes every aspect of each requirement in the above documents." This means that under the official standard, one can claim ONVIF conformance merely by passing those test specifications. Combining the two points by deductive logic, one reaches the conclusion that claiming ONVIF conformance does not automatically guarantee the completeness of an ONVIF implementation.
This is not to say that the official ONVIF testing specification is at fault for lacking completeness; it merely points out that hardly any automated testing process can achieve true completeness. If one really wanted "complete testing", the cost would likely exceed any reasonable economic justification. Taking a step back, users do not actually demand true "completeness"; they just want components to work together in a reasonable manner, because in the real world, demanding 100% of anything is almost never possible. Thus, one must take human knowledge into consideration when determining whether a device or technology is "usable and mature", rather than relying on an automated testing tool for that determination.
The best illustration of the reasoning above is that the test tool cannot determine whether the video image correctly reflects a user configuration change. For example, consider a test specification like the following:
- Change the configuration of a camera in resolution, frame rate, bit rate, brightness, saturation, and hue, then check whether the camera's output video indeed conforms to the requested configuration change.
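The first half of such a test, issuing the configuration change itself, is the easy part to automate. A minimal sketch in Python of what the change request looks like on the wire is shown below: it hand-builds the SOAP body of an ONVIF SetVideoEncoderConfiguration call using only the standard library. The configuration token, the field subset (only resolution, frame rate, and bit rate are shown), and the omission of WS-Security authentication are simplifying assumptions, not a complete ONVIF client.

```python
import xml.etree.ElementTree as ET

# ONVIF media-service namespaces (from the ONVIF Media Service WSDL).
NS_SOAP = "http://www.w3.org/2003/05/soap-envelope"
NS_TRT = "http://www.onvif.org/ver10/media/wsdl"    # media service operations
NS_TT = "http://www.onvif.org/ver10/schema"         # common ONVIF types

def build_set_encoder_request(token, width, height, framerate, bitrate):
    """Build the SOAP envelope for a SetVideoEncoderConfiguration request.

    Only a few fields are filled in; a real request must carry the full
    configuration previously fetched with GetVideoEncoderConfiguration.
    """
    env = ET.Element(f"{{{NS_SOAP}}}Envelope")
    body = ET.SubElement(env, f"{{{NS_SOAP}}}Body")
    req = ET.SubElement(body, f"{{{NS_TRT}}}SetVideoEncoderConfiguration")
    cfg = ET.SubElement(req, f"{{{NS_TRT}}}Configuration", {"token": token})
    res = ET.SubElement(cfg, f"{{{NS_TT}}}Resolution")
    ET.SubElement(res, f"{{{NS_TT}}}Width").text = str(width)
    ET.SubElement(res, f"{{{NS_TT}}}Height").text = str(height)
    rate = ET.SubElement(cfg, f"{{{NS_TT}}}RateControl")
    ET.SubElement(rate, f"{{{NS_TT}}}FrameRateLimit").text = str(framerate)
    ET.SubElement(rate, f"{{{NS_TT}}}BitrateLimit").text = str(bitrate)
    # Ask the device to persist the change across reboots.
    ET.SubElement(req, f"{{{NS_TRT}}}ForcePersistence").text = "true"
    return ET.tostring(env, encoding="unicode")

# "VideoEncoderToken0" is a hypothetical token for illustration.
request = build_set_encoder_request("VideoEncoderToken0", 1280, 720, 15, 2048)
```

Sending this request and receiving a success response is exactly what a test tool can check. Whether the video that subsequently comes out of the camera actually honors the request is the part the tool does not check.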
You will find that this test specification is not covered by the ONVIF test tool. Why is such an obviously essential test case not covered? An educated guess: since the only official way to verify ONVIF conformance is the test tool, every test specification must be something that can be programmed and verified by that tool. However, determining whether the camera's video output conforms to a configuration change falls beyond the comfortable range of current programming technology (meaning it could be done, but at a cost so extreme as to render it impractical). For now, with the technology readily available today, human intervention in ONVIF verification is still the best and most practical approach. Nonetheless, human intervention inevitably introduces subjective factors, so it is not easy to write a specification for it.
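To see where the boundary of "comfortable to program" lies, note that a few output properties are cheap to verify numerically, frame rate being the clearest: measure the spacing of decoded frames and compare against the requested value. Properties like brightness, saturation, and hue have no equally crisp pass/fail criterion, which is where human judgment enters. The sketch below shows only the easy case; the timestamps and the 10% tolerance are made-up illustration values.

```python
def measured_fps(timestamps):
    """Estimate frame rate from a list of frame arrival times (seconds)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two frames")
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

def conforms(requested_fps, timestamps, tolerance=0.1):
    """Pass if the measured rate is within +/-10% of the requested rate."""
    actual = measured_fps(timestamps)
    return abs(actual - requested_fps) / requested_fps <= tolerance

# Ten frames spaced 1/15 s apart, i.e. a stream running at 15 fps.
ts = [i / 15.0 for i in range(10)]
```

With these inputs, `conforms(15, ts)` passes and `conforms(30, ts)` fails. No comparably simple function exists for "does the image look more saturated now", which is precisely the point of the argument above.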
In conclusion, the correctness of an ONVIF implementation is judged by reading the ONVIF specification (not the testing specification). The ONVIF specification is a set of documents, and together they form a versioned spec set. Both sides of an integration (NVC and NVT) shall use this ONVIF specification set to communicate standard-conformance issues. Currently, Genius Vision implements ONVIF using ONVIF spec set version 2.3. Please note that this "version 2.3" does not mean exactly the same thing as the commonly cited "ONVIF 1.0, 1.2, 2.0".