Article by Edward Au
As of today, there are quite a number of active standardization projects and pre-standardization efforts (via the concept of Industry Connections) focused on virtual reality (VR) and augmented reality (AR), three-dimensional body processing, and healthcare.
At the September 2019 IEEE Standards Association (IEEE-SA) Standards Board meeting, the Board approved four new standards initiatives, namely P2830, P2840, P2841, and P2842, which cover shared machine learning, ethical computing, evaluation of deep learning, and secure multi-party computation, respectively.
P2830, Standard for Technical Framework and Requirements of Shared Machine Learning, aims to “define a framework and architectures for machine learning in which a model is trained using encrypted data that has been aggregated from multiple sources and is processed by a trusted third party”.
Shared machine learning is one approach to performing machine learning on data sets owned by multiple parties who either do not wish to, or are not permitted to, share their data, but who nonetheless wish to create models that draw on all of it. It differs from federated machine learning, in which each source trains its own model and the sources share the models but not the data themselves. Use cases being considered for this standard come from the education, healthcare, and finance industries.
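The distinction can be illustrated with a toy sketch, using a 1-D mean-estimation task as a stand-in "model". Everything below is illustrative and not defined by P2830: the additive masking stands in for real encryption, and `trusted_aggregate` stands in for the trusted third party.

```python
# Toy contrast between shared and federated learning on a mean-estimation
# task. Additive masking is an illustrative stand-in for encryption.
import random

random.seed(0)
# Three parties, each holding a private data set.
parties = [[random.gauss(5.0, 1.0) for _ in range(100)] for _ in range(3)]

# --- Shared-ML style: each party masks ("encrypts") its data; a trusted
# third party removes the masks and fits one model on the pooled data. ---
masks = [random.uniform(-10, 10) for _ in parties]
masked = [[x + m for x in data] for data, m in zip(parties, masks)]

def trusted_aggregate(masked_sets, masks):
    """Trusted third party: unmask, pool, and train (here, a pooled mean)."""
    pooled = [x - m for data, m in zip(masked_sets, masks) for x in data]
    return sum(pooled) / len(pooled)

shared_model = trusted_aggregate(masked, masks)

# --- Federated style: each party trains locally; only the models (here,
# local means) are shared and combined. The raw data never leaves a party. ---
local_models = [sum(d) / len(d) for d in parties]
federated_model = sum(local_models) / len(local_models)

print(shared_model, federated_model)
```

With equal-sized data sets the two approaches agree numerically here; the point of the sketch is what crosses the trust boundary: masked data in the first case, trained models in the second.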
P2840, Standard for Responsible AI Licensing, reflects IEEE-SA’s emphasis on ethical computing and aims to “describe specifications for the factors that shall be considered in the development of a Responsible Artificial Intelligence license”. The motivation for this initiative is that researchers and companies are reluctant to disseminate algorithms, code, and data that could potentially be put to unethical, immoral, or illegal use without a license that restricts such use.
The scope of P2841, Framework and Process for Deep Learning Evaluation, is to “define best practices for developing and implementing deep learning algorithms and define a framework and criteria for evaluating algorithm reliability and quality of the resulting software systems”. By developing such best practices, the number of flaws and failures in industrial deep learning deployments is expected to be significantly reduced.
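One simple reliability criterion of the kind such a framework might codify is prediction stability under small input perturbations. The sketch below is purely illustrative: the stand-in `model`, the perturbation size `eps`, and the trial count are assumptions, not anything specified by P2841.

```python
# Toy reliability check: fraction of inputs whose predicted class is stable
# under small random perturbations. Model and parameters are illustrative.
import random

random.seed(1)

def model(x):
    """Stand-in classifier: thresholds the sum of the features."""
    return int(sum(x) > 1.5)

def stability_rate(model, inputs, eps=0.01, trials=20):
    """Return the fraction of inputs with a perturbation-stable prediction."""
    stable = 0
    for x in inputs:
        base = model(x)
        if all(model([xi + random.uniform(-eps, eps) for xi in x]) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

inputs = [[random.random() for _ in range(3)] for _ in range(50)]
rate = stability_rate(model, inputs)
print(rate)
```

A real evaluation framework would define many such criteria (robustness, coverage, failure-mode analysis) and thresholds for each; this shows only the shape of one measurable check.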
Lastly, P2842, Recommended Practice for Secure Multi-party Computation, focuses on the security aspects of multi-party computation (MPC), in particular protection against unauthorized or unintended data breaches when MPC is applied. The scope of work includes the development of a technical framework and security levels for MPC.
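The core idea MPC protects, computing on data no single party may see, can be shown with a minimal additive-secret-sharing sketch. The protocol below is a textbook illustration under assumed parameters (the modulus `P`, three parties), not anything defined by P2842.

```python
# Minimal additive secret sharing: three parties jointly compute the sum of
# their private inputs without any party learning another party's value.
import random

P = 2**61 - 1  # large prime modulus (illustrative choice)

def share(secret, n=3):
    """Split `secret` into n additive shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

private_inputs = [42, 7, 13]
# Each party splits its input into shares; party i receives the i-th share
# of every input, so no party holds enough shares to reconstruct any input.
all_shares = [share(x) for x in private_inputs]
# Each party locally sums the shares it holds; combining the partial sums
# reveals only the total, never the individual inputs.
partial = [sum(col) % P for col in zip(*all_shares)]
result = sum(partial) % P
print(result)  # 62
```

The security levels P2842 targets concern exactly how such schemes behave against stronger adversaries (e.g. colluding or actively malicious parties), which this honest-parties sketch does not address.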