OPC UA Companion-Specification

OPC 40100-1

 

OPC UA for Machine Vision

Part 1: Control, configuration management, recipe management, result management

 

Release 1.0

2019-08

 

 

 

 

 

 

OPC 40100-1 (Edition 1.0, 2019-08) is identical with VDMA 40100-1:2019-08

 


 


VDMA Specification

August 2019

VDMA 40100-1

 

 

ICS 25.040.30

 

OPC UA for Machine Vision (OPC Machine Vision) –
Part 1: Control, configuration management, recipe management,
result management

OPC UA for Machine Vision (OPC Machine Vision) –
Teil 1: Steuerung, Konfigurationsverwaltung, Rezeptverwaltung, Ergebnisverwaltung

 

 

 

 

Document comprises 182 pages

VDMA

 

 

 

 

 

©  All rights reserved to VDMA e.V., Frankfurt/Main – Modification, amendment, editing, translation, copying and/or circulation only with permission in writing from VDMA e.V.

VDMA 40100-1:2019-08

 


Contents

Page

Contents. 4

Foreword. 14

1              Scope. 16

2              Normative references. 16

3              Terms, definitions and conventions. 17

3.1        Terms. 17

3.2        Abbreviations. 19

3.3        Conventions used in this document 19

3.3.1      Conventions for Node descriptions. 19

3.3.2      NodeIds and BrowseNames. 21

3.3.3      Common Attributes. 21

4              General information on Machine Vision and OPC UA. 24

4.1        Introduction to Machine Vision systems. 24

4.2        Introduction to OPC Unified Architecture. 26

4.2.1      What is OPC UA?. 26

4.2.2      Basics of OPC UA. 26

4.2.3      Information modelling in OPC UA. 27

5              Use cases. 32

6              OPC Machine Vision information model overview.. 33

7              ObjectTypes for the Vision System in General 34

7.1        VisionSystemType. 34

7.2        ConfigurationManagementType. 35

7.2.1      Overview. 35

7.2.2      ConfigurationManagementType methods. 37

7.3        ConfigurationFolderType. 44

7.4        ConfigurationTransferType. 44

7.4.1      Overview. 44

7.4.2      ConfigurationTransferType methods. 45

7.5        RecipeManagementType. 47

7.5.1      Overview. 47

7.5.2      RecipeManagementType Methods. 49

7.6        RecipeTransferType. 60

7.6.1      Overview. 60

7.6.2      RecipeTransferType Methods. 60

7.7        RecipeType. 62

7.7.1      Overview. 62

7.7.2      RecipeType Methods. 63

7.8        RecipeFolderType. 66

7.9        ProductFolderType. 67

7.10      ResultManagementType. 68

7.10.1        Overview. 68

7.10.2        ResultManagementType methods. 70

7.11      ResultFolderType. 76

7.12      ResultTransferType. 76

7.12.1        Overview. 76

7.12.2        ResultTransferType methods. 77

7.13      SafetyStateManagementType. 78

7.13.1        Overview. 78

7.13.2        SafetyStateManagementType methods. 79

8              ObjectTypes for Vision System State Handling. 80

8.1        State Machine overview. 80

8.1.1      Introduction. 80

8.1.2      Hierarchical state machines. 80

8.1.3      Automatic and triggered transitions and events. 81

8.1.4      Preventing transitions. 81

8.2        VisionStateMachineType. 81

8.2.1      Introduction. 81

8.2.2      Operation of the VisionStateMachineType. 82

8.2.3      VisionStateMachineType Overview. 85

8.2.4      Modes of operation. 85

8.2.5      VisionStateMachineType Definition. 86

8.2.6      VisionStateMachineType States. 87

8.2.7      VisionStateMachineType Transitions. 90

8.2.8      VisionStateMachineType Methods. 93

8.2.9      VisionStateMachineType EventTypes. 95

8.3        VisionAutomaticModeStateMachineType. 96

8.3.1      Introduction. 96

8.3.2      Operation of the “AutomaticMode” state machine. 98

8.3.3      VisionAutomaticModeStateMachineType Overview. 101

8.3.4      VisionAutomaticModeStateMachineType Definition. 102

8.3.5      VisionAutomaticModeStateMachineType States. 103

8.3.6      VisionAutomaticModeStateMachineType Transitions. 107

8.3.7      VisionAutomaticModeStateMachineType Methods. 109

8.3.8      VisionAutomaticModeStateMachineType Events. 113

8.3.9      Adding an operation mode. 119

8.4        VisionStepModelStateMachineType. 119

8.4.1      Operation of the VisionStepModelStateMachine. 119

8.4.2      VisionStepModelStateMachineType Overview. 121

8.4.3      VisionStepModelStateMachineType Definition. 121

8.4.4      VisionStepModelStateMachineType States. 122

8.4.5      VisionStepModelStateMachineType Transitions. 123

8.4.6      VisionStepModelStateMachineType Methods. 124

8.4.7      VisionStepModelStateMachine Events. 124

9              VariableTypes for the Vision System.. 127

9.1        ResultType. 127

10           EventTypes for the Vision System.. 130

10.1      VisionStateMachineType EventTypes. 130

10.2      VisionAutomaticModeStateMachineType EventTypes. 130

10.3      VisionStepModelStateMachineType EventTypes. 130

10.4      Vision System State EventTypes and ConditionTypes. 130

11           System States and Conditions for the Vision System.. 132

11.1      Introduction. 132

11.2      Client interaction. 132

11.2.1        Introduction. 132

11.2.2        No Interaction. 132

11.2.3        Acknowledgement 132

11.2.4        Confirmation. 132

11.2.5        Confirm All 133

11.3      Classes of Informational Elements. 133

11.3.1        Overview. 133

11.3.2        Diagnostic Information. 133

11.3.3        Information. 133

11.3.4        Warning. 133

11.3.5        Error 133

11.3.6        Persistent Error 133

11.4      EventTypes for Informational Elements. 133

11.4.1        VisionEventType. 133

11.4.2        VisionDiagnosticInfoEventType. 136

11.4.3        VisionInformationEventType. 136

11.4.4        VisionConditionType. 136

11.4.5        VisionWarningConditionType. 139

11.4.6        VisionErrorConditionType. 139

11.4.7        VisionPersistentErrorConditionType. 140

11.4.8        VisionSafetyEventType. 140

11.5      Interaction between Messages, State Machine, and Vision System.. 141

11.6      Structuring of Vision System State information. 143

11.6.1        Overview. 143

11.6.2        Production (PRD) 143

11.6.3        Standby (SBY) 143

11.6.4        Engineering (ENG) 143

11.6.5        Scheduled Downtime (SDT) 143

11.6.6        Unscheduled Downtime (UDT) 143

11.6.7        Nonscheduled Time (NST) 143

12           DataTypes for the Vision System.. 146

12.1      Handle. 146

12.2      TrimmedString. 146

12.3      TriStateBooleanDataType. 146

12.4      ProcessingTimesDataType. 146

12.5      MeasIdDataType. 146

12.6      PartIdDataType. 147

12.7      JobIdDataType. 147

12.8      BinaryIdBaseDataType. 148

12.9      RecipeIdExternalDataType. 148

12.10    RecipeIdInternalDataType. 148

12.11    RecipeTransferOptions. 148

12.12    ConfigurationDataType. 149

12.13    ConfigurationIdDataType. 149

12.14    ConfigurationTransferOptions. 149

12.15    ProductDataType. 149

12.16    ProductIdDataType. 150

12.17    ResultDataType. 150

12.18    ResultIdDataType. 151

12.19    ResultStateDataType. 152

12.20    ResultTransferOptions. 152

12.21    SystemStateDataType. 153

12.22    SystemStateDescriptionDataType. 153

13           Profiles and Namespaces. 154

13.1      Namespace Metadata. 154

13.2      Conformance Units. 154

13.2.1        Overview. 154

13.2.2        Server 154

13.2.3        Client 157

13.3      Facets and Profiles. 160

13.3.1        Overview. 160

13.3.2        Server 160

13.3.3        Client 167

13.4      Handling of OPC UA Namespaces. 174

A.1        Namespace and identifiers for Machine Vision Information Model 175

A.2        Profile URIs for Machine Vision Information Model 175

B.1        Recipe management 177

B.1.1     Terms used in recipe management 177

B.1.2     Recipes in general 177

B.1.3     Recipes on the vision system.. 178

B.1.4     Example for a recipe life cycle. 181

B.1.5     Recipes and the state of the vision system.. 181

B.1.6     Recipe-product relation. 183

B.1.7     Recipe transfer 183

 

Figures

Figure 1 – System model for OPC Machine Vision. 26

Figure 2 – The Scope of OPC UA within an Enterprise. 27

Figure 3 – A Basic Object in an OPC UA Address Space. 28

Figure 4 – The Relationship between Type Definitions and Instances. 29

Figure 5 – Examples of References between Objects. 30

Figure 6 – The OPC UA Information Model Notation. 30

Figure 7 – Overview of the OPC Machine Vision information model 33

Figure 8 – Overview VisionSystemType. 34

Figure 9 – Overview ConfigurationManagementType. 36

Figure 10 – Overview ConfigurationFolderType. 44

Figure 11 – Overview ConfigurationTransferType. 45

Figure 12 – Overview RecipeManagementType. 48

Figure 13 – RecipeTransferType. 60

Figure 14 – Overview RecipeType. 62

Figure 15 – Overview RecipeFolderType. 67

Figure 16 – Overview ProductFolderType. 68

Figure 17 – Overview ResultManagementType. 69

Figure 18 – Overview ResultFolderType. 76

Figure 19 – Overview ResultTransferType. 77

Figure 20 – Overview SafetyStateManagementType. 78

Figure 21 – Vision system state machine type hierarchy. 81

Figure 22 – States and transitions of the VisionStateMachineType. 82

Figure 23 – Overview VisionStateMachineType. 85

Figure 24 – States and transitions of the VisionAutomaticModeStateMachineType. 97

Figure 25 – Entering the VisionAutomaticModeStateMachine SubStateMachine. 100

Figure 26 – Overview VisionAutomaticModeStateMachineType. 101

Figure 27 – Overview RecipePreparedEventType. 113

Figure 28 – Overview JobStartedEventType. 114

Figure 29 – Overview ReadyEventType. 115

Figure 30 – Overview ResultReadyEventType. 116

Figure 31 – Overview AcquisitionDoneEventType. 118

Figure 32 – States and transitions of the VisionStepModelStateMachineType. 120

Figure 33 – Overview VisionStepModelStateMachineType. 121

Figure 34 – Overview EnterStepSequenceEvent 125

Figure 35 – Overview NextStepEvent 125

Figure 36 – Overview LeaveStepSequenceEventType. 126

Figure 37 – Overview ResultType. 127

Figure 38 – Overview VisionEventType. 134

Figure 39 – Overview VisionDiagnosticInfoEventType. 136

Figure 40 – Overview VisionInformationEventType. 136

Figure 41 – Overview VisionConditionType. 137

Figure 42 – Overview VisionWarningConditionType. 139

Figure 43 – Overview VisionErrorConditionType. 139

Figure 44 – Overview VisionPersistentErrorConditionType. 140

Figure 45 – Overview VisionSafetyEventType. 141


 

Tables

Table 1 – Terms. 17

Table 2 – Abbreviations. 19

Table 3 – Examples of DataTypes. 20

Table 4 – Type Definition Table. 21

Table 5 – Common Node Attributes. 22

Table 6 – Common Object Attributes. 22

Table 7 – Common Variable Attributes. 22

Table 8 – Common VariableType Attributes. 23

Table 9 – Common Method Attributes. 23

Table 10 – Definition of VisionSystemType. 35

Table 11 – Definition of ConfigurationManagementType. 37

Table 12 – AddConfiguration Method Arguments. 38

Table 13 – AddConfiguration Method AddressSpace Definition. 38

Table 14 – GetConfigurationById Method Arguments. 40

Table 15 – GetConfigurationById Method AddressSpace Definition. 40

Table 16 – GetConfigurationList Method Arguments. 41

Table 17 – GetConfigurationList Method AddressSpace Definition. 41

Table 18 – ReleaseConfigurationHandle Method Arguments. 42

Table 19 – ReleaseConfigurationHandle Method AddressSpace Definition. 42

Table 20 – RemoveConfiguration Method Arguments. 43

Table 21 – RemoveConfiguration Method AddressSpace Definition. 43

Table 22 – ActivateConfiguration Method Arguments. 43

Table 23 – ActivateConfiguration Method AddressSpace Definition. 44

Table 24 – Definition of ConfigurationFolderType. 44

Table 25 – Definition of ConfigurationTransferType. 45

Table 26 – GenerateFileForRead Method Arguments. 46

Table 27 – GenerateFileForRead Method AddressSpace Definition. 46

Table 28 – GenerateFileForWrite Method Arguments. 47

Table 29 – GenerateFileForWrite Method AddressSpace Definition. 47

Table 30 – Definition of RecipeManagementType. 49

Table 31 – AddRecipe Method Arguments. 50

Table 32 – AddRecipe Method AddressSpace Definition. 50

Table 33 – PrepareRecipe Method Arguments. 52

Table 34 – PrepareRecipe Method AddressSpace Definition. 52

Table 35 – UnprepareRecipe Method Arguments. 53

Table 36 – UnprepareRecipe Method AddressSpace Definition. 54

Table 37 – GetRecipeListFiltered Method Arguments. 55

Table 38 – GetRecipeListFiltered Method AddressSpace Definition. 55

Table 39 – ReleaseRecipeHandle Method Arguments. 56

Table 40 – ReleaseRecipeHandle Method AddressSpace Definition. 56

Table 41 – RemoveRecipe Method Arguments. 57

Table 42 – RemoveRecipe Method AddressSpace Definition. 57

Table 43 – PrepareProduct Method Arguments. 58

Table 44 – PrepareProduct Method AddressSpace Definition. 58

Table 45 – UnprepareProduct Method Arguments. 58

Table 46 – UnprepareProduct Method AddressSpace Definition. 59

Table 47 – UnlinkProduct Method Arguments. 59

Table 48 – UnlinkProduct Method AddressSpace Definition. 59

Table 49 – Definition of RecipeTransferType. 60

Table 50 – GenerateFileForRead Method Arguments. 61

Table 51 – GenerateFileForRead Method AddressSpace Definition. 61

Table 52 – GenerateFileForWrite Method Arguments. 61

Table 53 – GenerateFileForWrite Method AddressSpace Definition. 62

Table 54 – Definition of RecipeType. 63

Table 55 – LinkProduct Method Arguments. 64

Table 56 – LinkProduct Method AddressSpace Definition. 64

Table 57 – UnlinkProduct Method Arguments. 65

Table 58 – UnlinkProduct Method AddressSpace Definition. 65

Table 59 – Prepare Method Arguments. 65

Table 60 – Prepare Method AddressSpace Definition. 65

Table 61 – Unprepare Method Arguments. 66

Table 62 – Unprepare Method AddressSpace Definition. 66

Table 63 – Definition of RecipeFolderType. 67

Table 64 – Definition of ProductFolderType. 68

Table 65 – Definition of ResultManagementType. 69

Table 66 – GetResultById Method Arguments. 70

Table 67 – GetResultById Method AddressSpace Definition. 70

Table 68 – GetResultComponentsById Method Arguments. 72

Table 69 – GetResultComponentsById Method AddressSpace Definition. 73

Table 70 – GetResultListFiltered Method Arguments. 74

Table 71 – GetResultListFiltered Method AddressSpace Definition. 75

Table 72 – ReleaseResultHandle Method Arguments. 75

Table 73 – ReleaseResultHandle Method AddressSpace Definition. 75

Table 74 – Definition of ResultFolderType. 76

Table 75 – Definition of ResultTransferType. 77

Table 76 – GenerateFileForRead Method Arguments. 77

Table 77 – GenerateFileForRead Method AddressSpace Definition. 78

Table 78 – Definition of SafetyStateManagementType. 78

Table 79 – ReportSafetyState Method Arguments. 79

Table 80 – ReportSafetyState Method AddressSpace Definition. 79

Table 81 – VisionStateMachineType Address Space Definition. 87

Table 82 – VisionStateMachineType States. 88

Table 83 – VisionStateMachineType State Descriptions. 89

Table 84 – VisionStateMachineType Transitions. 91

Table 85 – Halt Method Arguments. 93

Table 86 – Halt Method AddressSpace Definition. 93

Table 87 – Reset Method Arguments. 94

Table 88 – Reset Method AddressSpace Definition. 94

Table 89 – SelectModeAutomatic Method Arguments. 94

Table 90 – SelectModeAutomatic Method AddressSpace Definition. 94

Table 91 – ConfirmAll Method Arguments. 95

Table 92 – ConfirmAll Method AddressSpace Definition. 95

Table 93 – StateChangedEventType AddressSpace Definition. 95

Table 94 – ErrorEventType AddressSpace Definition. 95

Table 95 – ErrorResolvedEventType AddressSpace Definition. 96

Table 96 – VisionAutomaticModeStateMachineType definition. 102

Table 97 – VisionAutomaticModeStateMachineType States. 103

Table 98 – VisionAutomaticModeStateMachineType State Descriptions. 104

Table 99 – VisionAutomaticModeStateMachineType transitions. 107

Table 100 – StartSingleJob Method Arguments. 109

Table 101 – StartSingleJob Method AddressSpace Definition. 109

Table 102 – StartContinuous Method AddressSpace Definition. 110

Table 103 – Abort Method Arguments. 111

Table 104 – Abort Method AddressSpace Definition. 111

Table 105 – Stop Method Arguments. 111

Table 106 – Stop Method AddressSpace Definition. 112

Table 107 – SimulationMode Method Arguments. 112

Table 108 – SimulationMode Method AddressSpace Definition. 112

Table 109 – Definition of RecipePreparedEventType. 113

Table 110 – Definition of JobStartedEventType. 114

Table 111 – Definition of ReadyEventType. 115

Table 112 – Definition of ResultReadyEventType. 116

Table 113 – Definition of AcquisitionDoneEventType. 118

Table 114 – VisionStepModelStateMachineType definition. 122

Table 115 – VisionStepModelStateMachineType states. 122

Table 116 – VisionStepModelStateMachineType state descriptions. 123

Table 117 – VisionStepModelStateMachineType transitions. 123

Table 118 – Sync Method Arguments. 124

Table 119 – Sync Method AddressSpace Definition. 124

Table 120 – EnterStepSequenceEventType definition. 125

Table 121 – NextStepEventType definition. 126

Table 122 – LeaveStepSequenceEventType definition. 126

Table 123 – ResultType VariableType. 128

Table 124 – VisionStateMachineType EventTypes. 130

Table 125 – VisionAutomaticModeStateMachineType EventTypes. 130

Table 126 – VisionStepModelStateMachineType EventTypes. 130

Table 127 – Vision System State EventTypes and ConditionTypes. 131

Table 128 – Information Elements. 133

Table 129 – VisionEventType Definition. 135

Table 130 – VisionDiagnosticInfoEventType. 136

Table 131 – VisionInformationEventType. 136

Table 132 – VisionConditionType. 138

Table 133 – VisionWarningConditionType. 139

Table 134 – VisionErrorConditionType. 140

Table 135 – VisionPersistentErrorConditionType. 140

Table 136 – VisionSafetyEventType Definition. 141

Table 137 – E10 system states. 143

Table 138 – Basic error paths. 144

Table 139 – Values of TriStateBooleanDataType. 146

Table 140 – Definition of ProcessingTimesDataType. 146

Table 141 – Definition of MeasIdDataType. 147

Table 142 – Definition of PartIdDataType. 147

Table 143 – Definition of JobIdDataType. 147

Table 144 – Definition of BinaryIdBaseDataType. 148

Table 145 – RecipeTransferOptions structure. 148

Table 146 – Definition of ConfigurationDataType. 149

Table 147 – Definition of ConfigurationTransferOptions. 149

Table 148 – Definition of ProductDataType. 149

Table 149 – Definition of ProductIdDataType. 150

Table 150 – Definition of ResultDataType. 151

Table 151 – Definition of ResultIdDataType. 152

Table 152 – Definition of ResultStateDataType. 152

Table 153 – Values of ResultStateDataType. 152

Table 154 – Definition of ResultTransferOptions. 152

Table 155 – Values of SystemStateDataType. 153

Table 156 – Definition of SystemStateDescriptionDataType. 153

Table 157 – NamespaceMetadata Object for this Specification. 154

Table 158 – Definition of Server Conformance Units. 154

Table 159 – Definition of Client Conformance Units. 157

Table 160 – Server Facets. 160

Table 161 – Definition of Basic Vision System Server Facet 161

Table 162 – Definition of Inline Vision System Server Facet 162

Table 163 – Definition of Automatic Mode Server Facet 162

Table 164 – Definition of Processing Times Server Facet 162

Table 165 – Definition of File Transfer Server Facet 163

Table 166 – Definition of Basic Result Handling Server Facet 163

Table 167 – Definition of Inline Result Handling Server Facet 163

Table 168 – Definition of Full Result Handling Server Facet 163

Table 169 – Definition of Standard Configuration Handling Server Facet 164

Table 170 – Definition of Full Configuration Handling Server Facet 164

Table 171 – Definition of Standard Recipe Handling Server Facet 164

Table 172 – Definition of Full Recipe Handling Server Facet 164

Table 173 – Definition of Basic Vision System Server Profile. 165

Table 174 – Definition of Basic Vision System Server Profile without OPC UA Security. 165

Table 175 – Definition of Simple Inline Vision System Server Profile. 165

Table 176 – Definition of Simple Inline Vision System with File Transfer Server Profile. 166

Table 177 – Definition of Simple Inline Vision System with File Revisioning Server Profile. 166

Table 178 – Definition of Inline Vision System with File Transfer Server Profile. 166

Table 179 – Definition of Inline Vision System with File Revisioning Server Profile. 166

Table 180 – Definition of Full Vision System Server Profile. 167

Table 181 – Definition of Client Facets. 167

Table 182 – Definition of Basic Control Client Facet 168

Table 183 – Definition of Full Control Client Facet 168

Table 184 – Definition of Basic Result Content Client Facet 169

Table 185 – Definition of Simple Result Content Client Facet 169

Table 186 – Definition of Full Result Content Client Facet 169

Table 187 – Definition of Result Meta Data Client Facet 169

Table 188 – Definition of Configuration Handling Client Facet 170

Table 189 – Definition of Recipe Handling Client Facet 170

Table 190 – Definition of Vision State Monitoring Client Facet 171

Table 191 – Definition of Production Quality Monitoring Client Facet 171

Table 192 – Definition of Data Backup Client Facet 171

Table 193 – Definition of Basic Control Client Profile. 172

Table 194 – Definition of Simple Control Client Profile. 172

Table 195 – Definition of Full Control Client Profile. 173

Table 196 – Definition of Result Content Client Profile. 173

Table 197 – Definition of Monitoring Client Profile. 173

Table 198 – Definition of Configuration Management Client Profile. 173

Table 199 – Namespaces used in a MachineVision Server 174

Table 200 – Namespaces used in this specification. 174

Table A.1 – Profile URIs. 175

OPC FOUNDATION, VDMA

____________

AGREEMENT OF USE

COPYRIGHT RESTRICTIONS

·       This document is provided "as is" by the OPC Foundation and the VDMA.

·       Right of use for this specification is restricted to this specification and does not grant rights of use for referred documents.

·       Right of use for this specification will be granted without cost.

·       This document may be distributed through computer systems, printed or copied as long as the content remains unchanged and the document is not modified.

·       OPC Foundation and VDMA do not guarantee usability for any purpose and shall not be made liable for any case using the content of this document.

·       The user of the document agrees to indemnify OPC Foundation and VDMA and their officers, directors and agents harmless from all demands, claims, actions, losses, damages (including damages from personal injuries), costs and expenses (including attorneys' fees) which are in any way related to activities associated with its use of content from this specification.

·       The document shall not be used in conjunction with company advertising, shall not be sold or licensed to any party.

·       The intellectual property and copyright is solely owned by the OPC Foundation and the VDMA.

 

PATENTS

The attention of adopters is directed to the possibility that compliance with or adoption of OPC or VDMA specifications may require use of an invention covered by patent rights. OPC Foundation or VDMA shall not be responsible for identifying patents for which a license may be required by any OPC or VDMA specification, or for conducting legal inquiries into the legal validity or scope of those patents that are brought to its attention. OPC or VDMA specifications are prospective and advisory only. Prospective users are responsible for protecting themselves against liability for infringement of patents.

WARRANTY AND LIABILITY DISCLAIMERS

WHILE THIS PUBLICATION IS BELIEVED TO BE ACCURATE, IT IS PROVIDED "AS IS" AND MAY CONTAIN ERRORS OR MISPRINTS. NEITHER THE OPC FOUNDATION NOR VDMA MAKES ANY WARRANTY OF ANY KIND, EXPRESSED OR IMPLIED, WITH REGARD TO THIS PUBLICATION, INCLUDING BUT NOT LIMITED TO ANY WARRANTY OF TITLE OR OWNERSHIP, IMPLIED WARRANTY OF MERCHANTABILITY OR WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE OR USE. IN NO EVENT SHALL THE OPC FOUNDATION OR VDMA BE LIABLE FOR ERRORS CONTAINED HEREIN OR FOR DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, RELIANCE OR COVER DAMAGES, INCLUDING LOSS OF PROFITS, REVENUE, DATA OR USE, INCURRED BY ANY USER OR ANY THIRD PARTY IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS MATERIAL, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

The entire risk as to the quality and performance of software developed using this specification is borne by the user of this specification.

RESTRICTED RIGHTS LEGEND

This Specification is provided with Restricted Rights. Use, duplication or disclosure by the U.S. government is subject to restrictions as set forth in (a) this Agreement pursuant to DFARs 227.7202-3(a); (b) subparagraph (c)(1)(i) of the Rights in Technical Data and Computer Software clause at DFARs 252.227-7013; or (c) the Commercial Computer Software Restricted Rights clause at FAR 52.227-19 subdivision (c)(1) and (2), as applicable. Contractor / manufacturer are the OPC Foundation, 16101 N. 82nd Street, Suite 3B, Scottsdale, AZ, 85260-1830

Trademarks

Most computer and software brand names have trademarks or registered trademarks. The individual trademarks have not been listed here.

GENERAL PROVISIONS

Should any provision of this Agreement be held to be void, invalid, unenforceable or illegal by a court, the validity and enforceability of the other provisions shall not be affected thereby.

This Agreement shall be governed by and construed under the laws of Germany.

This Agreement embodies the entire understanding between the parties with respect to, and supersedes any prior understanding or agreement (oral or written) relating to, this specification.


 

Foreword

The following document, OPC UA Companion Specification for Machine Vision, part 1 (short: OPC Machine Vision, part 1), is a joint document of VDMA and the OPC Foundation.

It summarizes the results of the VDMA OPC Machine Vision Initiative, containing contributions from all its members.

Carsten Born

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH

Matthias Damm

ascolab GmbH

Bernd Fiebiger

KUKA Deutschland GmbH

Thomas Freundlich

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH

Gerhard Helfrich

STEMMER IMAGING AG

Reinhard Heister

VDMA Robotics + Automation

Christian Hoffmann

PEER Group GmbH

Karlheinz Hohm

ISRA VISION AG

Ricardo Juárez Acuña

MVTec Software GmbH

Ralf Lay

Silicon Software GmbH

Christopher Leroi

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH

Wolfgang Mahnke

ascolab GmbH

Axel Schröder

ASENTICS GmbH & Co. KG

Thomas Schüttler

ASENTICS GmbH & Co. KG

Jure Skvarc

Kolektor Group d.o.o.

Mirko Tänzler

SAC Sirius Advanced Cybernetics GmbH

Peter Waszkewitz

Robert Bosch Manufacturing Solutions GmbH


Under the oversight of the Steering Committee of

Horst Heinol-Heikkinen (Chairman)

ASENTICS GmbH & Co. KG

Heiko Frohn

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH

Klaus-Henning Noffz

Silicon Software GmbH

Christian Ripperda

ISRA VISION AG

 

Technological outline

Today’s integration of machine vision systems into production control and IT systems is characterized by the development of proprietary (case by case / company by company) interfaces. In many cases, this means developing an interface for every single machine vision project, which results in very time-consuming, costly and error-prone efforts.

Currently, no generic interface for machine vision systems on the application / solution level exists that might be used as a basis for the companion specification. Therefore, an OPC UA Companion Specification for Machine Vision shall be developed as a standardization project with global reach under the G3 agreement.

OPC Unified Architecture is an industrial M2M communication technology for interoperability, providing secure, reliable and manufacturer-neutral transport of data and pre-processed information from the manufacturing level into IT, production planning or ERP systems. Domain groups are asked to develop companion specifications, i.e. to decide which domain-specific services and information are offered and which information and data are to be transferred.


 

Benefits for the machine vision industry

Through the OPC UA interface, relevant data achieves a broader reach on all levels, e.g. Control Device, Station and Enterprise levels, as well as a managed data flow. In connection with the Industry 4.0 movement, the relevance of machine vision systems will increase in all their roles due to the rich data they can provide on products – for quality assurance, track and trace, etc. – as well as processes – for process guidance, optimization, digital twinning, data analytics and other applications which we may not even foresee yet. The OPC UA interface will also enable a plug-and-play integration of a machine vision system into its process environment. These benefits will significantly advance the growth and use of machine vision systems.

 

Benefits for machine vision users

Easy and widespread accessibility of relevant data and a managed data flow will benefit users of machine vision through new application abilities and business models. In addition, a commonly accepted interface with global reach will reduce implementation times and reduce development costs for system integrators and users of machine vision systems.

 

VDMA Machine Vision

The VDMA (Verband Deutscher Maschinen- und Anlagenbau, Mechanical Engineering Industry Association) represents over 3,200 mainly small and medium size member companies in the engineering industry, making it one of the largest and most important industrial associations in Europe. As part of the VDMA Robotics + Automation association, VDMA Machine Vision unites more than 115 members: companies offering machine vision systems and components (cameras, optics, illumination, software, etc.). The objective of this industry-driven platform is to support the machine vision industry through a wide spectrum of activities and services such as standardization, statistics, marketing, public relations, trade fair policy, networking events and representation of interests. As member of the G3 agreement, VDMA Machine Vision cooperates in the field of standardization with other international machine vision associations, such as AIA (USA), CMVU (China), EMVA (Europe), and JIIA (Japan).

 

OPC Foundation

Originally derived from the Windows technology OLE for Process Control, the acronym OPC today stands for Open Platform Communications.

The OPC Foundation was established in 1998 to manage a global organization in which users, vendors and consortia collaborate to create data transfer standards for interoperability in industrial automation.

To support this mission, the OPC Foundation:

      Creates and maintains specifications

      Ensures compliance with OPC specifications via certification testing

      Collaborates with industry-leading standards organizations

The OPC Foundation has more than 450 OPC members, from small system integrators to the world’s largest automation and industrial suppliers.

See https://opcfoundation.org/ for more information on the OPC Foundation and OPC UA (last visited May 4th, 2018).


 

1       Scope

This document specifies an OPC UA Information Model for the representation of a machine vision system. OPC Machine Vision, part 1 aims at straightforward integration of a machine vision system into production control and IT systems. The scope is not only to complement or substitute existing interfaces between a machine vision system and its process environment by OPC UA, but also to create non-existent horizontal and vertical integration abilities to communicate relevant data to other authorized process participants, e.g. up to the IT enterprise level. To this end, the OPC Machine Vision interface allows for the exchange of information between a machine vision system and another machine vision system, a station PLC, a line controller, or any other software system in areas like MES, SCADA, ERP or data analytics systems.

 

2       Normative references

The following documents, in whole or in part, are normatively referenced in this document and are indispensable for its application. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

OPC 10000-1, OPC Unified Architecture - Part 1: Overview and Concepts.

      http://www.opcfoundation.org/UA/Part1/

OPC 10000-2, OPC Unified Architecture - Part 2: Security Model.

      http://www.opcfoundation.org/UA/Part2/

OPC 10000-3, OPC Unified Architecture - Part 3: Address Space Model.

      http://www.opcfoundation.org/UA/Part3/

OPC 10000-4, OPC Unified Architecture - Part 4: Services.

      http://www.opcfoundation.org/UA/Part4/

OPC 10000-5, OPC Unified Architecture - Part 5: Information Model.

      http://www.opcfoundation.org/UA/Part5/

OPC 10000-6, OPC Unified Architecture - Part 6: Mappings.

      http://www.opcfoundation.org/UA/Part6/

OPC 10000-7, OPC Unified Architecture - Part 7: Profiles.

      http://www.opcfoundation.org/UA/Part7/

OPC 10000-9, OPC Unified Architecture - Part 9: Alarms & Conditions.

      http://www.opcfoundation.org/UA/Part9/

SEMI E10-0312: SEMI E10 Standard: Specification for Definition and Measurement of Equipment Reliability, Availability, and Maintainability (RAM) and Utilization.


 

3       Terms, definitions and conventions

3.1      Terms

Table 1 – Terms

Term

Definition of Term

Camera

Vision sensor that is capable of extracting information from electro-magnetic waves.

Client

Receiver of information. Requests services from a server, usually an OPC Machine Vision system.

Configuration

The information stored in a configuration ensures that different vision systems generate equal results if the same recipe is used.

Environment

The set of external entities working with the vision system in one way or another, e.g. PLC, MES, etc.

External

Not part of the vision system or the OPC UA server; may refer to the automation system, the manufacturing execution system or other entities.

Job

The main purpose of a machine vision system is to execute jobs. A job may be a simple task, such as measuring a part’s diameter, or much more complex, like surface inspection of a long, continuous roll of printing paper.

Machine Vision System

A system for machine vision is any complex information processing system / smart camera / vision sensor / other component which, in the production context, is capable of extracting information from electro-magnetic waves in accordance with a given image processing task.

Inline Machine Vision System

Denotes a machine vision system which is used in the manner of a system working continuously within a production line (hence the name). This can mean 100% quality inspection, as well as providing poses for robot guidance for all parts, inspection of the entire area of a continuous material stream, and other similar use cases.

Product

In an industrial environment a machine vision system is usually used to check products that are manufactured. The name of such a product is often used outside the machine vision system to reference recipes of the devices used to manufacture the product. This eliminates the need for the external production control systems to know the IDs of local recipes of each device.

Recipe

Properties, procedures and parameters that describe a machine vision job for the vision system are stored in a recipe. The actual content of the data structure is out of the scope of this specification.

Server

Information provider, classified by the services it provides. The vision system commonly acts as an OPC UA server.

State Machine

A finite-state machine (FSM), or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The state machine can change from one state to another in response to some external inputs. The change from one state to another is called a transition. A state machine is defined by a list of its states, its initial state, and the conditions for each transition.

System-wide unique

Used in conjunction with identifiers and handles to denote that at any given time no other entity of the same type and meaning shall exist in the OPC UA server with the same value. No further assumptions about global or historical uniqueness are made; especially in the case of identifiers, however, globally unique identifiers are recommended.

Vision System

The underlying machine vision system for which the OPC UA server provides an abstracted view.

WebSocket

WebSocket is a computer communications protocol, providing full-duplex communication channels over a single TCP connection.


 

3.2      Abbreviations

 

Table 2 – Abbreviations

Abbreviation

Definition of Abbreviation

AC

Alarms and Conditions

BLOB

BLOB, a Binary Large Object, is a collection of binary data stored as a single entity in a database management system.

DCS

DCS, a distributed control system, is a computerised control system for a process or plant, usually with a large number of control loops, in which autonomous controllers are distributed throughout the system but there is central operator supervisory control. The DCS concept increases reliability and reduces installation costs by localising control functions near the process plant, with remote monitoring and supervision.

ERP

ERP, enterprise resource planning, is the integrated management of core business processes, often in real time and mediated by software and technology.

HMI

The user interface or human–machine interface is the part of the machine that handles the human–machine interaction.

HTTP

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, and hypermedia information systems.

ID

Identifier

MES

MES, manufacturing execution systems, are computerized systems used in manufacturing to track and document the transformation of raw materials into finished goods. An MES provides information that helps manufacturing decision makers understand how current conditions on the plant floor can be optimized to improve production output.

PLC

PLC, a programmable logic controller or programmable controller, is an industrial digital computer which has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines or robotic devices, or any activity that requires high-reliability control, ease of programming and process fault diagnosis.

PMS

PMS, the Product Manufacturing System, is generally a non-critical system for manufacturing activities, as it establishes communication with the board line systems that directly and physically handle production progress.

TCP/IP

The Internet protocol suite is the conceptual model and set of communications protocols used on the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP).

 

3.3      Conventions used in this document

3.3.1      Conventions for Node descriptions

Node definitions are specified using tables (see Table 4).

Attributes are defined by providing the Attribute name and a value, or a description of the value.

References are defined by providing the ReferenceType name, the BrowseName of the TargetNode and its NodeClass.

      If the TargetNode is a component of the Node being defined in the table, the Attributes of the composed Node are defined in the same row of the table.

      The DataType is only specified for Variables; “[<number>]” indicates a single-dimensional array, for multi-dimensional arrays the expression is repeated for each dimension (e.g. [2][3] for a two-dimensional array). For all arrays the ArrayDimensions is set as identified by <number> values. If no <number> is set, the corresponding dimension is set to 0, indicating an unknown size. If no number is provided at all the ArrayDimensions can be omitted. If no brackets are provided, it identifies a scalar DataType and the ValueRank is set to the corresponding value (see OPC 10000-3). In addition, ArrayDimensions is set to null or is omitted. If it can be Any or ScalarOrOneDimension, the value is put into “{<value>}”, so either “{Any}” or “{ScalarOrOneDimension}” and the ValueRank is set to the corresponding value (see OPC 10000-3) and the ArrayDimensions is set to null or is omitted. Examples are given in Table 3.

 

Table 3 – Examples of DataTypes

Notation

DataType

ValueRank

ArrayDimensions

Description

Int32

Int32

-1

omitted or null

A scalar Int32.

Int32[]

Int32

 1

omitted or {0}

Single-dimensional array of Int32 with an unknown size.

Int32[][]

Int32

 2

omitted or {0,0}

Two-dimensional array of Int32 with unknown sizes for both dimensions.

Int32[3][]

Int32

 2

{3,0}

Two-dimensional array of Int32 with a size of 3 for the first dimension and an unknown size for the second dimension.

Int32[5][3]

Int32

 2

{5,3}

Two-dimensional array of Int32 with a size of 5 for the first dimension and a size of 3 for the second dimension.

Int32{Any}

Int32

-2

omitted or null

An Int32 where it is unknown if it is scalar or array with any number of dimensions.

Int32{ScalarOrOneDimension}

Int32

-3

omitted or null

An Int32 where it is either a single-dimensional array or a scalar.

 

      The TypeDefinition is specified for Objects and Variables.

      The TypeDefinition column specifies a symbolic name for a NodeId, i.e. the specified Node points with a HasTypeDefinition Reference to the corresponding Node.

      The ModellingRule of the referenced component is provided by specifying the symbolic name of the rule in the ModellingRule column. In the AddressSpace, the Node shall use a HasModellingRule Reference to point to the corresponding ModellingRule Object.

If the NodeId of a DataType is provided, the symbolic name of the Node representing the DataType shall be used.

Nodes of all other NodeClasses cannot be defined in the same table; therefore only the used ReferenceType, their NodeClass and their BrowseName are specified. A reference to another part of this document points to their definition.

Table 4 illustrates the table. If no components are provided, the DataType, TypeDefinition and ModellingRule columns may be omitted and only a Comment column is introduced to point to the Node definition.

 

Table 4 – Type Definition Table

Attribute

Value

Attribute name

Attribute value. If it is an optional Attribute that is not set, “--” will be used.

 

 

References

NodeClass

BrowseName

DataType

TypeDefinition

ModellingRule

ReferenceType name

NodeClass of the target Node.

BrowseName of the target Node. If the Reference is to be instantiated by the server, then the value of the target Node’s BrowseName is “--“.

DataType of the referenced Node, only applicable for Variables.

TypeDefinition of the referenced Node, only applicable for Variables and Objects.

Referenced ModellingRule of the referenced Object.

NOTE Notes referencing footnotes of the table content.

 

Components of Nodes can be complex, that is, they can contain components themselves. The TypeDefinition, NodeClass, DataType and ModellingRule can be derived from the type definitions, and the symbolic name can be created as defined in Section 3.3.2.1. Therefore, those contained components are not explicitly specified; they are implicitly specified by the type definitions.

3.3.2      NodeIds and BrowseNames

3.3.2.1      NodeIds

The NodeIds of all Nodes described in this standard are only symbolic names. Annex A defines the actual NodeIds.

The symbolic name of each Node defined in this specification is its BrowseName, or, when it is part of another Node, the BrowseName of the other Node, a “.”, and the BrowseName of itself. In this case “part of” means that the whole has a HasProperty or HasComponent Reference to its part. Since all Nodes not being part of another Node have a unique name in this specification, the symbolic name is unique.

The namespace for all NodeIds defined in this specification is defined in Table 200. The NamespaceIndex for this namespace is Server-specific and depends on the position of the namespace URI in the server namespace table.

Note that this specification not only defines concrete Nodes, but also requires that some Nodes shall be generated, for example one for each Session running on the Server. The NodeIds of those Nodes are Server-specific, including the namespace. But the NamespaceIndex of those Nodes cannot be the NamespaceIndex used for the Nodes defined in this specification, because they are not defined by this specification but generated by the Server.

3.3.2.2      BrowseNames

The text part of the BrowseNames for all Nodes defined in this specification is specified in the tables defining the Nodes. The NamespaceIndex for all BrowseNames defined in this specification is defined in Annex A.

If the BrowseName is not defined by this specification, a namespace index prefix like ‘0:EngineeringUnits’ or ‘2:DeviceRevision’ is added to the BrowseName. This is typically necessary if a Property of another specification is overwritten or used in the OPC UA types defined in this specification. Table 200 provides a list of namespaces and their indexes as used in this specification.
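
Because the NamespaceIndex is server-specific, a client resolves the namespace URI against the server namespace table at runtime rather than hard-coding an index. The following minimal sketch illustrates this lookup using the open-source python-opcua library; the choice of library, the endpoint URL and the namespace URI shown are assumptions of this example only (the normative URI is given in Annex A).

from opcua import Client

# Endpoint and namespace URI are placeholders for this example; the normative
# namespace URI of this specification is listed in Annex A and Table 200.
ENDPOINT = "opc.tcp://vision-system.example:4840"
MV_NAMESPACE_URI = "http://opcfoundation.org/UA/MachineVision/"

client = Client(ENDPOINT)
client.connect()
try:
    # The NamespaceIndex is the position of the URI in the server's
    # namespace table; it is server-specific and must not be hard-coded.
    idx = client.get_namespace_index(MV_NAMESPACE_URI)
    print("Machine Vision namespace index on this server:", idx)
finally:
    client.disconnect()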

3.3.3      Common Attributes

3.3.3.1      General

The Attributes of Nodes, their DataTypes and descriptions are defined in OPC 10000-3. Attributes not marked as optional are mandatory and shall be provided by a Server. The following tables define if the Attribute value is defined by this specification or if it is server-specific.

For all Nodes specified in this specification, the Attributes named in Table 5 shall be set as specified in the table.

Table 5 – Common Node Attributes

Attribute

Value

DisplayName

The DisplayName is a LocalizedText. Each server shall provide the DisplayName identical to the BrowseName of the Node for the LocaleId “en”. Whether the server provides translated names for other LocaleIds is server-specific.

Description

Optionally a server-specific description is provided.

NodeClass

Shall reflect the NodeClass of the Node.

NodeId

The NodeId is described by BrowseNames as defined in 3.3.2.1.

WriteMask

Optionally the WriteMaskAttribute can be provided. If the WriteMaskAttribute is provided, it shall set all non-server-specific Attributes to not writable. For example, the DescriptionAttribute may be set to writable since a Server may provide a server-specific description for the Node. The NodeId shall not be writable, because it is defined for each Node in this specification.

UserWriteMask

Optionally the UserWriteMaskAttribute can be provided. The same rules as for the WriteMaskAttribute apply.

RolePermissions

Optionally server-specific role permissions can be provided.

UserRolePermissions

Optionally the role permissions of the current Session can be provided. The value is server-specific and depends on the RolePermissionsAttribute (if provided) and the current Session.

AccessRestrictions

Optionally server-specific access restrictions can be provided.

 

3.3.3.2      Objects

For all Objects specified in this specification, the Attributes named in Table 6 shall be set as specified in the table. The definitions for the Attributes can be found in OPC 10000-3.

 

Table 6 – Common Object Attributes

Attribute

Value

EventNotifier

Whether the Node can be used to subscribe to Events or not is server-specific.

 

3.3.3.3      Variables

For all Variables specified in this specification, the Attributes named in Table 7 shall be set as specified in the table. The definitions for the Attributes can be found in OPC 10000-3.

 

Table 7 – Common Variable Attributes

Attribute

Value

MinimumSamplingInterval

Optionally, a server-specific minimum sampling interval is provided.

AccessLevel

The access level for Variables used for type definitions is server-specific; for all other Variables defined in this specification, the access level shall allow reading; other settings are server-specific.

UserAccessLevel

The value for the UserAccessLevelAttribute is server-specific. It is assumed that all Variables can be accessed by at least one user.

Value

For Variables used as InstanceDeclarations, the value is server-specific; otherwise it shall represent the value described in the text.

ArrayDimensions

If the ValueRank does not identify an array of a specific dimension (i.e. ValueRank <= 0) the ArrayDimensions can either be set to null or the Attribute is missing. This behaviour is server-specific.

If the ValueRank specifies an array of a specific dimension (i.e. ValueRank > 0) then the ArrayDimensionsAttribute shall be specified in the table defining the Variable.

Historizing

The value for the HistorizingAttribute is server-specific.

AccessLevelEx

If the AccessLevelExAttribute is provided, it shall have the bits 8, 9, and 10 set to 0, meaning that read and write operations on an individual Variable are atomic, and arrays can be partly written.

 

3.3.3.4      VariableTypes

For all VariableTypes specified in this specification, the Attributes named in Table 8 shall be set as specified in the table. The definitions for the Attributes can be found in OPC 10000-3.

Table 8 – Common VariableType Attributes

Attributes

Value

Value

Optionally a server-specific default value can be provided.

ArrayDimensions

If the ValueRank does not identify an array of a specific dimension (i.e. ValueRank <= 0) the ArrayDimensions can either be set to null or the Attribute is missing. This behaviour is server-specific.

If the ValueRank specifies an array of a specific dimension (i.e. ValueRank > 0) then the ArrayDimensionsAttribute shall be specified in the table defining the VariableType.

 

3.3.3.5      Methods

For all Methods specified in this specification, the Attributes named in Table 9 shall be set as specified in the table. The definitions for the Attributes can be found in OPC 10000-3.

Table 9 – Common Method Attributes

Attributes

Value

Executable

All Methods defined in this specification shall be executable (ExecutableAttribute set to “True”), unless it is defined differently in the Method definition.

UserExecutable

The value of the UserExecutableAttribute is server-specific. It is assumed that all Methods can be executed by at least one user.

 


 

4       General information on Machine Vision and OPC UA

4.1      Introduction to Machine Vision systems

Machine vision systems are immensely diverse. This specification is based on a conceptual model of what constitutes a machine vision system’s functionality. Making good use of the specification requires an understanding of this conceptual model. It is touched on only briefly in this section; more details can be found in Annex B.

A machine vision system is any computer system, smart camera, vision sensor or any other component that has the capability to record and process digital images or video streams for the shop floor or other industrial markets, typically with the aim of extracting information from this data.

Digital images or video streams represent data in a general sense, comprising multiple spatial dimensions (e.g. 1D scanner lines, 2D camera images, 3D point clouds, image sequences, etc.) acquired by any kind of imaging technique (e.g. visible light, infrared, ultraviolet, x-ray, radar, ultrasonic, virtual imaging etc.).

With respect to a specific machine vision task, the output of a machine vision system can be raw or pre-processed images or any image-based measurements, inspection results, process control data, robot guidance data, etc.

Machine vision therefore covers a very broad range of systems as well as of applications.

System types range from small sensors and smart cameras to multi-computer setups with diverse sensor equipment.

Applications include identification (like DataMatrix code, bar code or character recognition), pose determination (e.g. for robot guidance), assembly checks, gauging up to very high accuracy, surface inspection, color identification, etc.

In industrial production, a machine vision system typically acts under the control and supervision of a machine control system, usually a PLC. There are many variations of this setup, depending on the type of product to be processed (e.g. individual parts or reel material), the organization of production, etc.

A common situation in the production of individual work pieces is that a PLC informs the machine vision system about the arrival of a new part by sending a start signal, then waits until the machine vision system has answered with a result of some kind, e.g. quality information (passed/failed), a measurement value (size), or position information (x- and y-coordinates, rotation, possibly z-coordinate, or a full pose in the case of a 3D system), and then continues processing the work piece based on the information given by the vision system. Traditionally, the interfaces used for communication between a PLC and a machine vision system are digital I/O, the various types of field buses and industrial Ethernet systems on the market and also simply Ethernet for the transmission of bulk data.
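
Purely as an illustration of how such a start/result handshake could look over OPC UA, the sketch below uses the open-source python-opcua client library. The endpoint, browse names and the method call are assumptions of this example; the actual Methods (e.g. StartSingleJob) and result-related EventTypes of the information model, including their arguments and fields, are defined in later clauses of this specification, not here.

from opcua import Client

class VisionEventHandler(object):
    # python-opcua calls this for every event delivered on the subscription.
    def event_notification(self, event):
        print("Event from vision system:", event)

client = Client("opc.tcp://vision-system.example:4840")  # illustrative endpoint
client.connect()
try:
    # Subscribe to events instead of polling a digital "result ready" line;
    # filtering for concrete EventTypes is omitted in this sketch.
    sub = client.create_subscription(500, VisionEventHandler())
    sub.subscribe_events()

    # Resolve the (assumed) namespace and browse to a hypothetical vision
    # system Object; "VisionSystem" is a placeholder BrowseName.
    ns = client.get_namespace_index("http://opcfoundation.org/UA/MachineVision/")
    vision = client.get_objects_node().get_child(["%d:VisionSystem" % ns])

    # Trigger a job via a Method call instead of a digital start signal.
    # The Method name and its (empty) argument list are placeholders.
    vision.call_method("%d:StartSingleJob" % ns)
finally:
    client.disconnect()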

Figure 1 gives a generalized view of a machine vision system in the context of this companion specification. It assumes that there is some machine vision framework responsible for the acquisition and processing of the images. This framework is completely implementation-specific to the system and is outside the scope of this companion specification.

This underlying system is currently presented to the “outside world”, e.g. the PLC, through various interfaces like digital I/O or field bus, typically using vendor-specific protocol definitions. The interface described in this specification may co-exist with these interfaces and offer an additional view on the system, or it may be used as the only interface to the system, depending on the requirements of the particular application.

The system may also be exposed through OPC UA interfaces according to other companion specifications; for example, DataMatrix code readers are by their nature machine vision systems but can also be exposed as systems adhering to the Auto ID specification. System vendors can, of course, also add their own OPC UA interfaces.

This companion specification provides a particular abstraction of a system envisioned to be running in an automated production environment, where “automated” is meant in a very broad sense. A test bench for analyzing individual parts can be viewed as automated in that the press of a button by the operator starts the task of the machine vision system.

This abstraction may reflect the inner workings of the machine vision framework or it may be a layer on top of the framework presenting a view of it which is only loosely related to its interior construction.

The basic assumption of the model is that a machine vision system in a production environment goes through a sequence of states which are of interest to and may be influenced by the outside world.

Therefore, a core element of this companion specification is a state machine view of the machine vision system.

Also, a machine vision system may require information from the outside world, in addition to the information it gathers itself by image acquisition, e.g. information about the type of product to be processed. And it will typically pass information to the outside world, e.g. results from the processing.

Therefore, in addition to the state machine, a set of methods and data types is required to allow for this flow of information. Due to the diverse nature of machine vision systems and their applications, these data types will have to allow for vendor- and application-specific extensions.

The intention of the state machine, the methods, as well as the data types, is to provide a framework allowing for standardized integration of machine vision systems into automated production systems, and guidance for filling in the application-specific areas.

Of course, vendors will always be able to extend this specification and provide additional services according to the specific capabilities of their systems and the particular applications.

 

Figure 1 – System model for OPC Machine Vision

4.2      Introduction to OPC Unified Architecture

4.2.1      What is OPC UA?

OPC UA is an open and royalty free set of standards designed as a universal communication protocol. While there are numerous communication solutions available, OPC UA has key advantages:

•	A state-of-the-art security model (see OPC 10000-2).

•	A fault-tolerant communication protocol.

•	An information modelling framework that allows application developers to represent their data in a way that makes sense to them.

OPC UA has a broad scope which delivers economies of scale for application developers. This means that a larger number of high-quality applications are available at a reasonable cost. When combined with semantic models such as OPC Machine Vision, OPC UA makes it easier for end users to access data via generic commercial applications.

The OPC UA model is scalable from small devices to ERP systems. OPC UA Servers process information locally and then provide that data in a consistent format to any application requesting it – ERP, MES, PMS, maintenance systems, HMI, smartphones or a standard browser, for example. For a more complete overview see OPC 10000-1.

4.2.2      Basics of OPC UA

As an open standard, OPC UA is based on standard internet technologies such as TCP/IP, HTTP and WebSockets.

As an extensible standard, OPC UA provides a set of Services (see OPC 10000-4) and a basic information model framework. This framework provides an easy way to create and expose vendor-defined information in a standardized manner. More importantly, all OPC UA Clients are expected to be able to discover and use vendor-defined information. This means OPC UA users can benefit from the economies of scale that come with generic visualization and historian applications. This specification is an example of an OPC UA Information Model designed to meet the needs of developers and users.

OPC UA Clients can be any consumer of data, ranging from a device on the network to browser-based thin clients and ERP systems. The full scope of OPC UA applications is shown in Figure 2.

Figure 2 – The Scope of OPC UA within an Enterprise

OPC UA provides a robust and reliable communication infrastructure with mechanisms for handling lost messages, failover, heartbeat, etc. With its binary-encoded data, it offers a high-performing data exchange solution. Security is built into OPC UA, as security requirements become more and more important, especially since environments are connected to the office network or the internet and attackers are starting to focus on automation systems.

4.2.3      Information modelling in OPC UA

4.2.3.1      Concepts

OPC UA provides a framework that can be used to represent complex information as Objects in an AddressSpace which can be accessed with standard services. These Objects consist of Nodes connected by References. Different classes of Nodes convey different semantics. For example, a Variable Node represents a value that can be read or written. The Variable Node has an associated DataType that can define the actual value, such as a string, float, structure etc. It can also describe the Variable value as a variant. A Method Node represents a function that can be called. Every Node has a number of Attributes including a unique identifier called a NodeId and a non-localized name called the BrowseName. An Object representing a ‘Reservation’ is shown in Figure 3.

Figure 3 – A Basic Object in an OPC UA Address Space

Object and Variable Nodes represent instances and they always reference a TypeDefinition (ObjectType or VariableType) Node which describes their semantics and structure. Figure 4 illustrates the relationship between an instance and its TypeDefinition.

The type Nodes are templates that define all of the children that can be present in an instance of the type. In the example in Figure 4 the PersonType ObjectType defines two children: First Name and Last Name. All instances of PersonType are expected to have the same children with the same BrowseNames. Within a type the BrowseNames uniquely identify the children. This means Client applications can be designed to search for children based on the BrowseNames from the type instead of NodeIds. This eliminates the need for manual reconfiguration of systems if a Client uses types that multiple Servers implement.
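The following non-normative sketch illustrates this pattern with the open-source Python asyncua client library: the client navigates by BrowseName rather than by NodeId, so the same code works against any server implementing the same type. The endpoint URL, the namespace index 2 and the node names are placeholders and are not defined by this specification.

import asyncio
from asyncua import Client

async def main():
    # Connect and navigate by BrowseName; "2:Person1" and "2:FirstName" are
    # hypothetical names used only for illustration.
    async with Client("opc.tcp://localhost:4840") as client:
        person = await client.nodes.objects.get_child("2:Person1")
        first_name = await person.get_child("2:FirstName")
        print(await first_name.read_value())

asyncio.run(main())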

 

OPC UA also supports the concept of sub-typing. This allows a modeller to take an existing type and extend it. There are rules regarding sub-typing defined in OPC 10000-3, but in general they allow the extension of a given type or the restriction of a DataType. For example, the modeller may decide that the existing ObjectType in some cases needs an additional Variable. The modeller can create a subtype of the ObjectType and add the Variable. A Client that is expecting the parent type can treat the new type as if it were of the parent type. Regarding DataTypes, subtypes can only restrict: if a Variable is defined to have a numeric value, a subtype could restrict it to a float.

Figure 4 – The Relationship between Type Definitions and Instances

References allow Nodes to be connected in ways that describe their relationships. All References have a ReferenceType that specifies the semantics of the relationship. References can be hierarchical or non-hierarchical. Hierarchical References are used to create the structure of Objects and Variables; non-hierarchical References are used to create arbitrary associations. Applications can define their own ReferenceTypes by creating subtypes of an existing ReferenceType. Subtypes inherit the semantics of the parent but may add additional restrictions. Figure 5 depicts several References connecting different Objects.

Figure 5 – Examples of References between Objects

The figures above use a notation that was developed for the OPC UA specification. The notation is summarized in Figure 6. UML representations can also be used; however, the OPC UA notation is less ambiguous because there is a direct mapping from the elements in the figures to Nodes in the AddressSpace of an OPC UA Server.

Figure 6 – The OPC UA Information Model Notation

A complete description of the different types of Nodes and References can be found in OPC 10000-3 and the base structure is described in OPC 10000-5.

The OPC UA specification defines a very wide range of functionality in its basic information model. It is not expected that all Clients or Servers support all functionality in the OPC UA specifications. OPC UA includes the concept of Profiles, which segment the functionality into testable, certifiable units. This allows the definition of functional subsets (that are expected to be implemented) within a companion specification. The Profiles do not restrict functionality, but generate requirements for a minimum set of functionality (see OPC 10000-7).

4.2.3.2      Namespaces

OPC UA allows information from many different sources to be combined into a single coherent AddressSpace. Namespaces are used to make this possible by eliminating naming and id conflicts between information from different sources. Namespaces in OPC UA have a globally unique string called a NamespaceUri and a locally unique integer called a NamespaceIndex. The NamespaceIndex is only unique within the context of a Session between an OPC UA Client and an OPC UA Server. The Services defined for OPC UA use the NamespaceIndex to specify the Namespace for qualified values.

There are two types of values in OPC UA that are qualified with Namespaces: NodeIds and QualifiedNames. NodeIds are globally unique identifiers for Nodes. This means the same Node with the same NodeId can appear in many Servers. This, in turn, means Clients can have built-in knowledge of some Nodes. OPC UA Information Models generally define globally unique NodeIds for the TypeDefinitions defined by the Information Model.

QualifiedNames are non-localized names qualified with a Namespace. They are used for the BrowseNames of Nodes and allow the same names to be used by different information models without conflict. TypeDefinitions are not allowed to have children with duplicate BrowseNames; however, instances do not have that restriction.
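As a non-normative illustration, the following sketch (Python, asyncua) resolves the NamespaceIndex of a NamespaceUri at runtime instead of hard-coding it; the URI shown for the Machine Vision namespace is an assumption and should be taken from the server’s NamespaceArray.

import asyncio
from asyncua import Client

async def main():
    async with Client("opc.tcp://localhost:4840") as client:
        # Assumed URI of the OPC Machine Vision namespace; the resulting index
        # is only valid within this session and qualifies BrowseNames and NodeIds.
        uri = "http://opcfoundation.org/UA/MachineVision/"
        idx = await client.get_namespace_index(uri)
        print(f"Machine Vision namespace index on this server: {idx}")

asyncio.run(main())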

4.2.3.3      Companion Specifications

An OPC UA companion specification for an industry specific vertical market describes an Information Model by defining ObjectTypes, VariableTypes, DataTypes and ReferenceTypes that represent the concepts used in the vertical market, and potentially also well-defined Objects as entry points into the AddressSpace.

 


 

5       Use cases

A vision system assesses situations automatically through machine vision and machine evaluation. This document describes how a vision system is addressed via OPC UA and integrated into a superordinate or peer-to-peer structure. The description covers all aspects relevant for operation.

 

Interaction of the client with the vision system

A vision system usually has the role of an OPC UA server, i.e. its states are exposed via an OPC UA server. This is what is described and defined in this specification.

The client system can control the vision system via OPC UA. The vision system may also be controlled by a different entity through a different interface.

 

The vision system reports important events – such as evaluation results and error states – automatically to a subscribed client.

In addition, the client can query data from the vision system at any time.

 

State Machine

The state machine model is an abstraction of a machine vision system, which maps the possible operational states of the machine vision system to a state model with a fixed number of states.

Each interaction of the client system with the vision system depends on the current state of the model and also the state and capabilities of the underlying vision system.

State changes are initiated by method calls from the client system or triggered by internal or external events. They may also be triggered by a secondary interface. Each state change is communicated to the client system.

The state machine is described in more detail in Section 8.

 

Recipe Management

The properties, procedures and parameters that describe a machine vision task for the vision system are stored in a recipe.

Usually there are multiple usable recipes on a vision system.

This specification provides methods for activating, loading, and saving recipes.

Recipes are handled as binary objects. The interpretation of a recipe is not part of this specification.

For a detailed description of Recipe Management, please refer to B.1.

 

Result Transfer

The image processing results are transmitted to the client system asynchronously. This transmission includes information on product assignment, times, and statuses.

The detailed data format of a result is not included in this specification.

 

Error Management

There is an interface for error notification and interactive error management.


 

6       OPC Machine Vision information model overview

Figure 7 shows the main object types and the relations between them.

 

Figure 7 – Overview of the OPC Machine Vision information model


 

7       ObjectTypes for the Vision System in General

7.1      VisionSystemType

This ObjectType defines the representation of a machine vision system. Figure 8 shows the hierarchical structure and details of the composition. It is formally defined in Table 10.

Instances of this ObjectType provide a general communication interface for a machine vision system. This interface makes it possible to interact with the system without knowledge of its internal structure and underlying processes.

System behavior is modeled with a mandatory hierarchical finite state machine.

VisionSystemType contains four optional management objects, RecipeManagement, ConfigurationManagement, ResultManagement, and SafetyStateManagement. All of these provide access to the exposed functionality of the machine vision system.

 

Figure 8 – Overview VisionSystemType


 

Table 10 – Definition of VisionSystemType

Attribute  | Value
BrowseName | VisionSystemType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the BaseObjectType defined in OPC 10000-5
HasComponent | Object   | ConfigurationManagement | --     | ConfigurationManagementType | Optional
HasComponent | Object   | RecipeManagement        | --     | RecipeManagementType        | Optional
HasComponent | Object   | ResultManagement        | --     | ResultManagementType        | Optional
HasComponent | Object   | SafetyStateManagement   | --     | SafetyStateManagementType   | Optional
HasComponent | Object   | VisionStateMachine      | --     | VisionStateMachineType      | Mandatory
HasComponent | Variable | DiagnosticLevel         | UInt16 | BaseDataVariableType        | Optional
HasComponent | Variable | SystemState | SystemStateDescriptionDataType | BaseDataVariableType | Optional

 

ConfigurationManagement provides the methods and properties required for managing the configurations of the vision system. ConfigurationManagementType is described in Section 7.2.

RecipeManagement provides functionality to add, remove, prepare, and retrieve vision system recipes. RecipeManagementType is described in Section 7.5.

ResultManagement provides methods and properties necessary for managing the results. ResultManagementType is described in Section 7.10.

SafetyStateManagement provides functionality to inform the vision system about the change of an external safety state. SafetyStateManagementType is described in Section 7.13.

VisionStateMachine provides information about the current state of the vision system and methods for controlling it. VisionStateMachineType is defined in Section 8.2.

DiagnosticLevel specifies the threshold for the severity of diagnostic messages to be generated by the server. More information can be found in Section 11.3.

SystemState represents the system state in terms of the SEMI E10 standard. More information can be found in Section 11.6.
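As a non-normative illustration, the sketch below (Python, asyncua) locates the components of a VisionSystemType instance by BrowseName. The instance name VisionSystem1, the endpoint URL and the namespace URI are placeholders; the optional management objects are probed rather than assumed to exist.

import asyncio
from asyncua import Client

async def main():
    async with Client("opc.tcp://vision-system:4840") as client:
        idx = await client.get_namespace_index("http://opcfoundation.org/UA/MachineVision/")  # assumed URI
        vs = await client.nodes.objects.get_child(f"{idx}:VisionSystem1")   # hypothetical instance
        # The state machine is a mandatory component; CurrentState is inherited
        # from the base FiniteStateMachineType in namespace 0.
        state_machine = await vs.get_child(f"{idx}:VisionStateMachine")
        current_state = await state_machine.get_child("0:CurrentState")
        print("Current state:", await current_state.read_value())
        for name in ("ConfigurationManagement", "RecipeManagement",
                     "ResultManagement", "SafetyStateManagement"):
            try:
                await vs.get_child(f"{idx}:{name}")
                print(name, "is exposed")
            except Exception:   # e.g. BadNoMatch when the optional object is absent
                print(name, "is not exposed")

asyncio.run(main())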

7.2      ConfigurationManagementType

7.2.1      Overview

This ObjectType defines the representation of the machine vision system configuration management. Figure 9 shows the hierarchical structure and details of the composition. It is formally defined in Table 11.

Even supposedly identical vision systems will differ in some details. In order to produce the same results, the vision systems have to be adjusted individually, e.g. calibrated. Within this document, the set of all parameters that are needed to get the system working is called a configuration. Configurations can be used to align different vision systems that have the same capabilities, so that these systems produce the same results for the same recipes.

Instances of this ObjectType handle all configurations that are exposed by the system. Only one configuration can be active at a time. This active configuration affects all recipes used in the machine vision system. The configurations can optionally also be exposed in a folder, in order to provide access to the client.

Configurations are handled as files; metadata of configurations can be viewed directly but not changed by the client. The interpretation of the configuration’s content is not part of this specification.

Figure 9 – Overview ConfigurationManagementType

 

Table 11 – Definition of ConfigurationManagementType

Attribute  | Value
BrowseName | ConfigurationManagementType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the BaseObjectType defined in OPC 10000-5
HasComponent | Object   | ConfigurationTransfer      | -- | ConfigurationTransferType | Optional
HasComponent | Object   | Configurations             | -- | ConfigurationFolderType   | Optional
HasComponent | Method   | AddConfiguration           | -- | --                        | Optional
HasComponent | Method   | GetConfigurationList       | -- | --                        | Mandatory
HasComponent | Method   | GetConfigurationById       | -- | --                        | Mandatory
HasComponent | Method   | ReleaseConfigurationHandle | -- | --                        | Optional
HasComponent | Method   | RemoveConfiguration        | -- | --                        | Optional
HasComponent | Method   | ActivateConfiguration      | -- | --                        | Mandatory
HasComponent | Variable | ActiveConfiguration | ConfigurationDataType | BaseDataVariableType | Mandatory

 

ConfigurationTransfer is an instance of the ConfigurationTransferType defined in Section 7.4 and it is used to transfer the contents of a configuration by the temporary file transfer method defined in OPC 10000-5, Annex C.4.

Configurations is an instance of the ConfigurationFolderType and it is used to organize variables of DataType ConfigurationDataType which is defined in Section 12.9. If the server chooses to expose configuration information in the Address Space, the Object may contain the set of all configurations available on the system. This is implementation-defined. If a server does not expose configuration information in the Address Space, this Object is expected to be non-existent.

The DataTypes used in the ConfigurationManagementType are defined in OPC 10000-5 and in Section 11.6 of this specification.

7.2.2      ConfigurationManagementType methods

7.2.2.1      AddConfiguration

7.2.2.1.1     Overview

This method is used to add a configuration to the configuration management of the vision system. It concerns itself only with the metadata of the configuration; the actual content is transferred by an object of ConfigurationTransferType, which is defined in Section 7.4.

The intended behavior of this method for different input arguments is described in the following subsections.

Signature

AddConfiguration (
[in]   ConfigurationIdDataType   externalId
[out]  ConfigurationIdDataType   internalId
[out]  NodeId                    configuration
[out]  Boolean                   transferRequired
[out]  Int32                     error);

 

Table 12 – AddConfiguration Method Arguments

externalId – Identification of the configuration used by the environment. This argument must not be empty.
internalId – System-wide unique ID for identifying a configuration. This ID is assigned by the vision system.
configuration – If the server chooses to represent the configuration in the Address Space, it shall return the NodeId of the newly created entry in the Configurations folder here. If the server uses only method-based configuration management, this shall be a null NodeId as defined in OPC 10000-3.
transferRequired – In this argument, the server returns whether the vision system assumes that a transfer of the file content of the configuration is required. Note that this is only a hint for the client: if the server returns TRUE, the client will have to assume that the vision system needs the configuration content and shall transfer it; if the server returns FALSE, the client may transfer the configuration content anyway.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 13 – AddConfiguration Method AddressSpace Definition

Attribute  | Value
BrowseName | AddConfiguration

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

7.2.2.1.2     New ExternalId

If AddConfiguration is called with an ExternalId not yet existing in the configuration management of the vision system, it is expected that the vision system creates an appropriate management structure with an InternalId which is unique on the system. The server then returns this InternalId.

If the server chooses to represent all or selected configurations in the Address Space and if the new configuration matches the current selection criteria, the server shall create a new entry in the Configurations folder in the Address Space.

The method will return TRUE in the TransferRequired argument. Since the ExternalId does not yet exist in the configuration management of the vision system, it is expected that the configuration content does not yet exist either in the local configuration storage of the vision system, and therefore needs to be transferred.

7.2.2.1.3     Identically Existing ExternalId with identical configuration

If AddConfiguration is called with an ExternalId already existing in the configuration management of the vision system, it is expected that the vision system checks whether an identical version of the configuration already exists, provided that the content of the ExternalId allows for such a check. (A way to perform this comparison without having to download the binary content first is offered by the optional hash value in the ExternalId. The idea is that the client computes a hash of the configuration content and passes that hash in the ExternalId. The server can then check this hash against a hash transmitted earlier, or it can compute a hash itself over the content of the configuration currently stored on the vision system side. For this procedure, the server needs to know the hash algorithm used by the client, which can be transmitted in the hashAlgorithm member of the ExternalId.)

Note that the method has no way of checking this with the actual configuration content which is not yet known to the vision system.

The method will return FALSE in the TransferRequired argument if the method comes to the conclusion that the configuration already exists with identical content on the vision system. Note that the result is not binding for the client who may decide to transfer the configuration content anyway.

If the server represents configurations in the Address Space, no new entry shall be created in the configurations folder.

7.2.2.1.4     Identically Existing ExternalId with different configuration

If AddConfiguration comes to the conclusion that the content of the configuration to be transferred is different from the content already existing for this ExternalId, it shall return TRUE in the TransferRequired argument.

The behavior with respect to the management of the configuration metadata and configuration content is entirely application-defined. The vision system may decide to create a new management structure and add the configuration content to the local configuration store, or it may decide to re-use the existing ExternalId and overwrite the configuration content. In any case, the vision system shall create a new, system-wide unique InternalId for this configuration.

If the server chooses to represent configurations in the Address Space, the behavior with respect to these objects should mirror the behavior of the vision system in its internal configuration management.

7.2.2.1.5     Local creation or editing of configurations

This is not, strictly speaking, a use case of the method AddConfiguration, but results are comparable, and therefore the use case is described here.

If a configuration is created locally on the vision system or is loaded onto the vision system by a different interface than the OPC Machine Vision interface, i.e. the configuration is added without using method AddConfiguration, then this configuration shall have a system-wide unique InternalId, just like a configuration added through the method.

If an existing configuration which was uploaded to the vision system through the method AddConfiguration, is locally changed, the ExternalId shall be removed from the changed version and it shall receive a new system-wide unique InternalId so that the two configurations cannot be confused. The vision system may record the history from which configuration it was derived.

If the server exposes configurations in the Address Space and if the locally created or edited configurations match the current filter criteria, then they shall be represented as nodes in the Configurations folder, with their system-wide unique InternalIds, but without ExternalIds.
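As a non-normative illustration, the following sketch (Python, asyncua) shows the metadata-first flow of AddConfiguration: the method is called with an ExternalId, and the client then decides, based on transferRequired, whether to push the file content through the ConfigurationTransfer object. The namespace URI, the instance name and the field initialization of ConfigurationIdDataType are assumptions; the companion data types are assumed to have been generated by load_data_type_definitions().

import asyncio
from asyncua import Client, ua

async def main():
    async with Client("opc.tcp://vision-system:4840") as client:
        # Generates Python classes for the server-defined structures, e.g.
        # ConfigurationIdDataType (assumes the server exposes DataTypeDefinitions).
        await client.load_data_type_definitions()
        idx = await client.get_namespace_index("http://opcfoundation.org/UA/MachineVision/")
        cfg_mgmt = await client.nodes.objects.get_child(
            [f"{idx}:VisionSystem1", f"{idx}:ConfigurationManagement"])

        external_id = ua.ConfigurationIdDataType()   # fill the fields as defined in Section 12
        internal_id, cfg_node, transfer_required, error = await cfg_mgmt.call_method(
            f"{idx}:AddConfiguration", external_id)
        if error != 0:
            raise RuntimeError(f"AddConfiguration failed with error {error}")
        if transfer_required:
            # Content is pushed separately via the ConfigurationTransfer object
            # (GenerateFileForWrite / Write / CloseAndCommit, see Section 7.4).
            print("Server requests a content transfer for", internal_id)

asyncio.run(main())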

 

7.2.2.2      GetConfigurationById

This method is used to get the metadata of a single configuration, identified by its InternalId.

Signature

GetConfigurationById (
[in]   ConfigurationIdDataType   internalId
[in]   Int32                     timeout

[out]  Handle                    configurationHandle
[out]  ConfigurationDataType     configuration
[out]  Int32                     error);

 

Table 14 – GetConfigurationById Method Arguments

internalId – Identification of the configuration used by the vision system. This argument must not be empty.
timeout – With this argument the client can give a hint to the server how long it will need access to the configuration data. A value > 0 indicates an estimated maximum time for processing the data in milliseconds; a value = 0 indicates that the client will not need anything besides the data returned by the method call; a value < 0 indicates that the client cannot give an estimate. The client cannot rely on the data being available during the indicated time period; the argument is merely a hint allowing the server to optimize its resource management.
configurationHandle – The client can use the handle returned by the server to call the ReleaseConfigurationHandle method to indicate to the server that it has finished processing the configuration data, allowing the server to optimize its resource management. If the server does not support the ReleaseConfigurationHandle method, this value shall be 0. The client cannot rely on the data being available until ReleaseConfigurationHandle is called.
configuration – Requested configuration.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 15 – GetConfigurationById Method AddressSpace Definition

Attribute  | Value
BrowseName | GetConfigurationById

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

7.2.2.3      GetConfigurationList

This method is used to get a list of all configurations. It concerns itself only with the metadata of the configurations; the actual content is transferred by a ConfigurationTransferType object.

Signature

GetConfigurationList (
[in]   UInt32                    maxResults
[in]   UInt32                    startIndex
[in]   Int32                     timeout
[out]  Boolean                   isComplete
[out]  UInt32                    resultCount
[out]  Handle                    configurationHandle
[out]  ConfigurationDataType[]   configurationList
[out]  Int32                     error);

 

Table 16 – GetConfigurationList Method Arguments

maxResults – Maximum number of configurations to return in one call; by passing 0, the client indicates that it does not put a limit on the number of configurations.
startIndex – Shall be 0 on the first call; multiples of maxResults on subsequent calls to retrieve portions of the entire list, if necessary.
timeout – With this argument the client can give a hint to the server how long it will need access to the configuration data. A value > 0 indicates an estimated maximum time for processing the data in milliseconds; a value = 0 indicates that the client will not need anything besides the data returned by the method call; a value < 0 indicates that the client cannot give an estimate. The client cannot rely on the data being available during the indicated time period; the argument is merely a hint allowing the server to optimize its resource management.
isComplete – Indicates whether there are more configurations in the entire list than retrieved according to startIndex and resultCount.
resultCount – Gives the number of valid results in configurationList.
configurationHandle – The server shall return to each client requesting configuration data a system-wide unique handle identifying the configuration set / client combination. The handle spans continuation calls, so on every call by the same client where startIndex is not 0, the same handle shall be returned. This handle can be used by the client in a call to the ReleaseConfigurationHandle method, thereby indicating to the server that it has finished processing the configuration set, allowing the server to optimize its resource management. The client cannot rely on the data being available until the ReleaseConfigurationHandle method is called. If the server does not support ReleaseConfigurationHandle, this value shall be 0.
configurationList – List of configurations.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 17 – GetConfigurationList Method AddressSpace Definition

Attribute  | Value
BrowseName | GetConfigurationList

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

The following cases must be considered with respect to the number of available configurations (see also the client-side paging sketch below):

•	The number of configurations to be returned is less than or equal to maxResults: the first call, with startIndex = 0, returns isComplete = TRUE, so the client knows that no further calls are necessary. resultCount gives the number of valid elements in the configurationList array.

•	The number of configurations to be returned is larger than maxResults: the first N calls (where the N-th call uses startIndex = (N-1)*maxResults and N is the number of configurations divided by maxResults, rounded down) return isComplete = FALSE, so the client knows that further calls are necessary. The following call returns isComplete = TRUE, so the client knows that no further calls are necessary. resultCount gives the number of valid elements in the configurationList array on each call (so on the first N calls, this should be maxResults).
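The following non-normative sketch (Python, asyncua) shows such a client-side paging loop over GetConfigurationList; cfg_mgmt and the namespace index idx are obtained as in the earlier sketches, and error handling is reduced to the essentials.

from asyncua import ua

async def read_all_configurations(cfg_mgmt, idx, max_results=50, timeout_ms=-1):
    configurations, start_index, handle = [], 0, 0
    while True:
        is_complete, result_count, handle, page, error = await cfg_mgmt.call_method(
            f"{idx}:GetConfigurationList",
            ua.Variant(max_results, ua.VariantType.UInt32),
            ua.Variant(start_index, ua.VariantType.UInt32),
            ua.Variant(timeout_ms, ua.VariantType.Int32))
        if error != 0:
            raise RuntimeError(f"GetConfigurationList failed with error {error}")
        configurations.extend((page or [])[:result_count])
        if is_complete:
            break
        start_index += max_results            # continuation call for the next portion
    if handle != 0:                           # 0 means handles are not supported
        # The handle is passed back exactly as returned; its concrete DataType
        # is defined by this specification.
        await cfg_mgmt.call_method(f"{idx}:ReleaseConfigurationHandle", handle)
    return configurations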

 

7.2.2.4      ReleaseConfigurationHandle

This method is used to inform the server that the client has finished processing a given configuration set allowing the server to free resources managing this configuration set.

The server should keep the data of the configuration set available for the client until the ReleaseConfigurationHandle method is called or until a timeout given by the client has expired. However, the server is free to release the data at any time, depending on its internal resource management, so the client cannot rely on the data being available. ReleaseConfigurationHandle is merely a hint allowing the server to optimize its internal resource management. For timeout usage, see the description in Section 7.2.2.2.

Signature

ReleaseConfigurationHandle (
[in]   Handle      configurationHandle
[out]  Int32       error);

 

Table 18 – ReleaseConfigurationHandle Method Arguments

configurationHandle – Handle returned by GetConfigurationById or GetConfigurationList, identifying the configuration set / client combination.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 19 – ReleaseConfigurationHandle Method AddressSpace Definition

Attribute  | Value
BrowseName | ReleaseConfigurationHandle

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

7.2.2.5      RemoveConfiguration

This method is used to remove a configuration from the configuration management of the vision system.

Application Note:

It may be required from a vision system – e.g. in pharmaceutical or other safety-critical applications – to keep a record of the prior existence of a removed configuration. This may be important in such systems for the meta information of results that were generated while the removed configuration was active. It serves to keep track of which configurations were available on the vision system.

Signature

RemoveConfiguration (
[in]   ConfigurationIdDataType   internalId
[out]  Int32                     error);

 

Table 20 – RemoveConfiguration Method Arguments

internalId – Identification of the configuration used by the vision system. This argument must not be empty.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 21 – RemoveConfiguration Method AddressSpace Definition

Attribute  | Value
BrowseName | RemoveConfiguration

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

7.2.2.6      ActivateConfiguration

This method is used to activate a configuration from the configuration management of the vision system.

Since only a single configuration can be active at any time, this method shall deactivate any other configuration which may currently be active. From that point on until the next call to this method the vision system will conduct its operation according to the settings of the activated configuration.

Note that there is no way to deactivate a configuration except by activating another one; this avoids having no active configuration on the system.

Signature

ActivateConfiguration (
[in]   ConfigurationIdDataType   internalId
[out]  Int32                     error);

 

Table 22 – ActivateConfiguration Method Arguments

internalId – Identification of the configuration used by the vision system. This argument must not be empty.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 23 – ActivateConfiguration Method AddressSpace Definition

Attribute  | Value
BrowseName | ActivateConfiguration

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

7.3      ConfigurationFolderType

This ObjectType is a subtype of the FolderType and is used to organize the configurations of a vision system. Figure 10 shows the hierarchical structure and details of the composition. It is formally defined in Table 24.

Instances of this ObjectType organize all available configurations of the vision system, which the server decides to expose in the Address Space. It may contain no configuration if no configuration is available, if the server does not expose configurations in the Address Space at all, or if no configuration matches the criteria of the server for exposure in the Address Space.

Note that the folder contains only metadata of the configurations, not the actual configuration data of the vision system.

 

Figure 10 – Overview ConfigurationFolderType

 

Table 24 – Definition of ConfigurationFolderType

Attribute  | Value
BrowseName | ConfigurationFolderType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the FolderType defined in OPC 10000-5
HasComponent | Variable | <Configuration> | ConfigurationDataType | BaseDataVariableType | OptionalPlaceholder

 

The ConfigurationDataType used in the ConfigurationFolderType is defined in Section 12.12.

7.4      ConfigurationTransferType

7.4.1      Overview

This ObjectType is a subtype of the TemporaryFileTransferType defined in OPC 10000-5 and is used to transfer configuration data as a file.

The ConfigurationTransferType overrides the Methods GenerateFileForRead and GenerateFileForWrite to specify the concrete type of the generateOptions parameter of these Methods.

Figure 11 shows the hierarchical structure and details of the composition. It is formally defined in Table 25.

 

Figure 11 – Overview ConfigurationTransferType

 

 

Table 25 – Definition of ConfigurationTransferType

Attribute  | Value
BrowseName | ConfigurationTransferType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the TemporaryFileTransferType defined in OPC 10000-5
HasComponent | Method | 0:GenerateFileForRead  | -- | -- | Mandatory
HasComponent | Method | 0:GenerateFileForWrite | -- | -- | Mandatory

 

7.4.2      ConfigurationTransferType methods

7.4.2.1      GenerateFileForRead

This method is used to start the read file transaction. A successful call of this method creates a temporary FileType Object with the file content and returns the NodeId of this Object and the file handle to access the Object.

Signature

GenerateFileForRead (
[in]   ConfigurationTransferOptions   generateOptions
[out]  NodeId                         fileNodeId
[out]  UInt32                         fileHandle
[out]  NodeId                         completionStateMachine);

 

Table 26 – GenerateFileForRead Method Arguments

generateOptions – The structure used to define the generate options for the file.
fileNodeId – NodeId of the temporary file.
fileHandle – The FileHandle of the opened TransferFile. The FileHandle can be used to access the TransferFile methods Read and Close.
completionStateMachine – If the creation of the file is completed asynchronously, the parameter returns the NodeId of the corresponding FileTransferStateMachineType Object. If the creation of the file is already completed, the parameter is null. If a FileTransferStateMachineType Object NodeId is returned, the Read method of the file fails until the TransferState has changed to ReadTransfer.

Table 27 – GenerateFileForRead Method AddressSpace Definition

Attribute  | Value
BrowseName | GenerateFileForRead

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory
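As a non-normative illustration, the sketch below (Python, asyncua) reads the content of a configuration through the temporary file transfer: GenerateFileForRead returns an already opened temporary FileType Object, whose standard Read and Close methods are then called with the returned file handle. cfg_transfer is a ConfigurationTransferType node, and generate_options is a ConfigurationTransferOptions structure whose fields are defined elsewhere in this specification and therefore only indicated here.

from asyncua import ua

async def read_configuration_file(client, cfg_transfer, idx, generate_options, chunk=4096):
    file_nodeid, file_handle, completion_sm = await cfg_transfer.call_method(
        f"{idx}:GenerateFileForRead", generate_options)
    # If completion_sm is not null, the file is produced asynchronously and Read
    # fails until the FileTransferStateMachine has reached ReadTransfer (not handled here).
    file_node = client.get_node(file_nodeid)
    handle = ua.Variant(file_handle, ua.VariantType.UInt32)
    data = b""
    while True:
        part = await file_node.call_method("0:Read", handle,
                                           ua.Variant(chunk, ua.VariantType.Int32))
        if not part:
            break
        data += part
    await file_node.call_method("0:Close", handle)
    return data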

 

7.4.2.2      GenerateFileForWrite

This method is used to start the write file transaction. A successful call of this method creates a temporary FileType Object with the file content and returns the NodeId of this Object and the file handle to access the Object.

 

Signature

GenerateFileForWrite (
[in]   ConfigurationTransferOptions   generateOptions
[out]  NodeId                         fileNodeId
[out]  UInt32                         fileHandle);

 

Table 28 – GenerateFileForWrite Method Arguments

generateOptions – The structure used to define the generate options for the file.
fileNodeId – NodeId of the temporary file.
fileHandle – The fileHandle of the opened TransferFile. The fileHandle can be used to access the TransferFile methods Write and CloseAndCommit.

Table 29 – GenerateFileForWrite Method AddressSpace Definition

Attribute  | Value
BrowseName | GenerateFileForWrite

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory
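The corresponding non-normative write sketch (Python, asyncua, same assumptions as the read sketch above) pushes new configuration content to the vision system and commits it with CloseAndCommit.

from asyncua import ua

async def write_configuration_file(client, cfg_transfer, idx, generate_options, payload, chunk=4096):
    file_nodeid, file_handle = await cfg_transfer.call_method(
        f"{idx}:GenerateFileForWrite", generate_options)
    file_node = client.get_node(file_nodeid)
    handle = ua.Variant(file_handle, ua.VariantType.UInt32)
    for offset in range(0, len(payload), chunk):
        # payload is a bytes object; each slice is written as a ByteString.
        await file_node.call_method("0:Write", handle, payload[offset:offset + chunk])
    # CloseAndCommit makes the transferred content available to the server application.
    await file_node.call_method("0:CloseAndCommit", handle)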

 

7.5      RecipeManagementType

7.5.1      Overview

This ObjectType defines the representation of the machine vision system recipe management (for a conceptual overview of recipe management see Section B.1, for a definition of recipes itself, see Section B.1.2.1). Figure 12 shows the hierarchical structure and details of the composition. It is formally defined in Table 30.

For the actual data transfer, RecipeManagementType makes use of the RecipeTransferType, derived from the TemporaryFileTransferType defined in OPC 10000-5, beginning with version 1.04.

If the server chooses to expose recipe data in the Address Space (see Section B.1.3.3) using the Recipes folder of this type, the FileType object component of the RecipeType objects in this folder can also be used directly for the data transfer.


 

Figure 12 – Overview RecipeManagementType

 

Table 30 – Definition of RecipeManagementType

Attribute  | Value
BrowseName | RecipeManagementType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the BaseObjectType defined in OPC 10000-5
HasComponent | Method | AddRecipe             | -- | --                 | Optional
HasComponent | Method | PrepareRecipe         | -- | --                 | Mandatory
HasComponent | Method | UnprepareRecipe       | -- | --                 | Mandatory
HasComponent | Method | GetRecipeListFiltered | -- | --                 | Mandatory
HasComponent | Method | ReleaseRecipeHandle   | -- | --                 | Optional
HasComponent | Method | RemoveRecipe          | -- | --                 | Optional
HasComponent | Method | PrepareProduct        | -- | --                 | Optional
HasComponent | Method | UnprepareProduct      | -- | --                 | Optional
HasComponent | Method | UnlinkProduct         | -- | --                 | Optional
HasComponent | Object | RecipeTransfer        | -- | RecipeTransferType | Optional
HasComponent | Object | Recipes               | -- | RecipeFolderType   | Optional
HasComponent | Object | Products              | -- | ProductFolderType  | Optional

 

RecipeTransfer is an instance of the RecipeTransferType defined in Section 7.6 and it is used to transfer the contents of a recipe by the temporary file transfer method defined in OPC 10000-5, Annex C.4.

Recipes is an instance of the RecipeFolderType that organizes RecipeType objects, if the server chooses to expose recipe information in the Address Space. In this case, it may contain the set of all recipes available on the system or a filtered subset, e.g. the set of all currently prepared recipes. This is implementation-defined. If a server does not expose recipe information in the Address Space, this folder is expected to be non-existent.

Products is an instance of the ProductFolderType that organizes ProductDataType variables, if the server chooses to expose product information in the Address Space. In this case, it may contain the set of all products available on the system or a filtered subset, e.g. the set of all products for which recipes are currently prepared. This is implementation-defined. If a server does not expose product information in the Address Space, this folder is expected to be non-existent.

7.5.2      RecipeManagementType Methods

7.5.2.1      AddRecipe

7.5.2.1.1     Overview

This method is used to add a recipe to the recipe management of the vision system. It concerns itself only with the metadata of the recipe; the actual content is transferred by a RecipeTransferType object.

The intended behavior of this method for different input arguments is described in the following subsections.

Signature

AddRecipe (
[in]   RecipeIdExternalDataType    externalId
[in]   ProductIdDataType           productId
[out]  RecipeIdInternalDataType    internalId
[out]  NodeId                      recipe
[out]  NodeId                      product
[out]  Boolean                     transferRequired
[out]  Int32                       error);

Table 31 – AddRecipe Method Arguments

externalId – Identification of the recipe used by the environment. This argument must not be empty.
productId – Identification of a product the recipe is to be used for. This argument may be empty.
internalId – Internal identification of the recipe. This identification shall be system-wide unique and must be returned.
recipe – If the server chooses to represent the recipe in the Address Space, it shall return the NodeId of the newly created entry in the Recipes folder here. If the server uses only method-based recipe management, this shall be null. Note that, even if the server uses the Recipes folder to expose recipe data in the Address Space, this may be empty if the newly created recipe does not fit the selection criteria of the server for the entries in this folder.
product – If the server chooses to represent product information in the Address Space, it shall return the NodeId of a newly created entry in the Products folder here. If the server uses only method-based recipe management, this shall be null. Note that, even if the server uses the Products folder to expose product data in the Address Space, this may be null if the newly created product does not fit the selection criteria of the server for the entries in the Products folder.
transferRequired – In this argument, the server returns whether the vision system assumes that a transfer of the file content of the recipe is required. Note that this is only a hint for the client: if the server returns TRUE, the client will have to assume that the vision system needs the recipe content and shall transfer it; if the server returns FALSE, the client may transfer the recipe content anyway.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 32 – AddRecipe Method AddressSpace Definition

Attribute  | Value
BrowseName | AddRecipe

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

7.5.2.1.2     New ExternalId

If AddRecipe is called with an ExternalId not yet existing in the recipe management of the vision system, it is expected that the vision system creates an appropriate management structure with an InternalId which is system-wide unique. The server may then return this InternalId; however, the client cannot rely on this.

If the server chooses to represent all or selected recipes in the Address Space and if the new recipe matches the current selection criteria, the server shall create a new entry in the Recipes folder in the Address Space.

The method will return TRUE in the TransferRequired argument. Since the ExternalId does not yet exist in the recipe management of the vision system, it is expected that the recipe content does not yet exist either in the local recipe storage of the vision system, and therefore needs to be transferred.

If the ProductId argument is non-empty, it is expected that the vision system creates an appropriate management structure linking the newly created recipe for use with this ProductId. If the ProductId does not yet exist on the vision system, it is expected that it is created.

If the ProductId argument is empty, no such linking takes place. Note that it will not be possible to start a job based on a ProductId not linked to a recipe.

If the server chooses to represent all or selected products in the Address Space and if the ProductId matches the selection criteria, the server shall create a new entry in the Products folder in the Address Space.

If the server chooses to represent all or selected recipes in the Address Space and if the given recipe matches the selection criteria, the ProductId shall be added to the list of products within the appropriate Recipe node.

7.5.2.1.3     Identically Existing ExternalId with identical recipe

If AddRecipe is called with an ExternalId already existing in the recipe management of the vision system, it is expected that the vision system checks whether an identical version of the recipe already exists, provided that the content of the ExternalId allows for such a check (most likely using the hash value).

Note that the method has no way of checking this with the actual recipe content which is not yet known to the vision system.

The method will return FALSE in the TransferRequired argument if the system comes to the conclusion that the recipe already exists with identical content on the vision system. Note that the result is not binding for the client who may decide to transfer the recipe content anyway.

If the server represents recipes in the Address Space, no new entry shall be created in the recipes folder.

The behavior with regard to the ProductId argument is as described above for a new ExternalId. This way of calling AddRecipe can be used to link an existing recipe with another product.

7.5.2.1.4     Identically Existing ExternalId with different recipe

If AddRecipe comes to the conclusion that the content of the recipe to be transferred is different from the content already existing for this ExternalId, it shall return TRUE in the TransferRequired argument.

The behavior with respect to the management of the recipe metadata and recipe content is entirely application-defined. The vision system may decide to create a new management structure with a new InternalId and add the recipe content to the local recipe store, or it may decide to re-use the existing ExternalId and overwrite the recipe content.

If the server chooses to represent recipes in the Address Space, the behavior with respect to these recipe objects should mirror the behavior of the vision system in its internal recipe management.

The behavior with regard to the ProductId argument is as described above for a new ExternalId. If the vision system stores both recipe versions, it is implementation-defined whether both are linked to the ProductId or not.

Note that overwriting a recipe shall result in a change to the InternalId of the recipe. The change may affect only the hash value; the identifier may remain the same. Historical storage is not required.

7.5.2.1.5     Local creation or editing of recipes

This is not, strictly speaking, a use case of the method AddRecipe, but results are comparable; therefore, the use case is described here.

If a recipe is created locally on the vision system or is loaded onto the vision system by a different interface than the OPC Machine Vision interface, i.e. the recipe is added without using the AddRecipe method, then this recipe shall have a system-wide unique InternalId, just like a recipe added through the method.

If an existing recipe which was uploaded to the vision system through AddRecipe is locally changed, the ExternalId shall be removed from the changed version and it shall receive a new system-wide unique InternalId so that the two recipes cannot be confused. Of course the vision system may record the history from which recipe it was derived.

If the server represents recipes in the Address Space and if the locally created or edited recipes match the current filter criteria, then they shall be represented as nodes in the Recipes folder, with their system-wide unique InternalIds, but without ExternalIds.

An important special case is the local editing of an already prepared recipe, described in Section B.1.2.3. Since after local editing, the already prepared recipe is different from before, effectively a new recipe has been prepared by the local editing. Therefore, a new RecipePrepared event shall be generated (see also Section 8.3.8.1).
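As a non-normative illustration, the short sketch below (Python, asyncua) announces a recipe for a given product via AddRecipe; recipe_mgmt is a RecipeManagementType node, and external_id and product_id are instances of the companion-specification structures generated by load_data_type_definitions().

async def add_recipe_for_product(recipe_mgmt, idx, external_id, product_id):
    (internal_id, recipe_node, product_node,
     transfer_required, error) = await recipe_mgmt.call_method(
        f"{idx}:AddRecipe", external_id, product_id)
    if error != 0:
        raise RuntimeError(f"AddRecipe failed with error {error}")
    # transfer_required is only a hint: TRUE means the recipe content still has to
    # be pushed through the RecipeTransfer object before the recipe can be prepared.
    return internal_id, transfer_required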

7.5.2.2      PrepareRecipe

This method is used to prepare a recipe so that it can be used for starting a job on the vision system.

Signature

PrepareRecipe (
[in]   RecipeIdExternalDataType   externalId
[in]   RecipeIdInternalDataType   internalIdIn
[out]  RecipeIdInternalDataType   internalIdOut
[out]  Boolean                    isCompleted
[out]  Int32                      error);

 

Table 33 – PrepareRecipe Method Arguments

externalId – Identification of the recipe used by the environment which is to be prepared.
internalIdIn – Internal identification of the recipe which is to be prepared. The client can use either the externalId or the internalIdIn, leaving the other empty. If both are given, the internalIdIn takes precedence.
internalIdOut – Internal identification of the recipe selected based on the given externalId or internalId.
isCompleted – Flag to indicate that the recipe has been completely prepared before the method returned. If FALSE, the client needs either to check the properties of the recipe to determine when preparation has completed or wait for the RecipePrepared event.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 34 – PrepareRecipe Method AddressSpace Definition

Attribute  | Value
BrowseName | PrepareRecipe

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

If the vision system is in state Initialized, it is expected to change into state Ready after successful preparation of the recipe and be able to execute jobs called by a Start method with the given ExternalId.

If the vision system is already in state Ready when PrepareRecipe is called, it is expected to be in state Ready again after successful preparation of the recipe and be able to execute jobs called by a Start method with the given ExternalId. Depending on the capabilities of the vision system, it may temporarily leave state Ready for state Initialized, then return to Ready, or, if the system is capable of preparing recipes in the background, it may stay in state Ready and react instantaneously to Start jobs for other, already prepared, recipes. Also depending on the capabilities of the vision system, preparing an additional recipe may unprepare others if the number of recipes being prepared at the same time is limited.

The preparation of a recipe may be a time-consuming operation. The client cannot necessarily assume that the recipe is completely prepared when the method returns. The client should therefore check the preparedness of the recipe after a reasonable amount of time or wait for a RecipePrepared event with the correct ExternalId to be fired. During the time required for preparing a recipe, the system may or may not be capable of reacting to a start method. However, the server is free to handle PrepareRecipe as a synchronous method, returning only after the recipe is completely prepared unless an error has occurred.
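As a non-normative illustration, the following sketch (Python, asyncua) prepares a recipe and, if the method returns isCompleted = FALSE, waits for an event from the vision system object before starting jobs. For simplicity the sketch reacts to any event from that source; a real client would filter for the RecipePrepared event defined in Section 8.3.8.1. The vision_system and recipe_mgmt nodes and the companion data types are obtained as in the earlier sketches.

import asyncio
from asyncua import ua

class _PreparedHandler:
    def __init__(self):
        self.done = asyncio.Event()
    def event_notification(self, event):
        self.done.set()     # a real client would check for RecipePrepared here

async def prepare_recipe(client, vision_system, recipe_mgmt, idx, external_id):
    handler = _PreparedHandler()
    sub = await client.create_subscription(200, handler)
    await sub.subscribe_events(vision_system)           # BaseEventType events from this source
    empty_internal_id = ua.RecipeIdInternalDataType()   # leave internalIdIn empty
    internal_id, is_completed, error = await recipe_mgmt.call_method(
        f"{idx}:PrepareRecipe", external_id, empty_internal_id)
    if error != 0:
        raise RuntimeError(f"PrepareRecipe failed with error {error}")
    if not is_completed:
        # Preparation continues in the background; wait for the notification.
        await asyncio.wait_for(handler.done.wait(), timeout=60)
    await sub.delete()
    return internal_id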

Note that the local editing of an already prepared recipe, as described in Sections 7.5.2.1.5 and B.1.2.3, is considered to be the same as the preparation of a new recipe, because after local editing, the already prepared recipe is different from before, so effectively a new recipe has been prepared by the local editing. Therefore, a new RecipePrepared event shall be generated (see also Section 8.3.8.1).

Some recipes may exclude each other from being in prepared state at the same time, for example, when there are mechanical movements involved. Having two such recipes prepared at the same time would mean that an instantaneous reaction to calling the Start method for a prepared recipe would not be possible. However, this is at the discretion of the vision system. The client may merely notice an unusually long reaction time between calling the Start method and the actual state change, or the vision system may prevent the simultaneous preparation by returning an error.

If there is more than one recipe with the identical ExternalId – e.g. due to local copying and modifying of recipes on the vision system – it is implementation-defined how this ambiguity will be handled. The vision system will prepare only a single one of these recipes, which may be the latest one or the latest externally defined one.

 

7.5.2.3      UnprepareRecipe

This method is used to revert the preparation of a recipe so that it can no longer be used for starting a job on the vision system.

Signature

UnprepareRecipe (
[in]   RecipeIdExternalDataType    externalId
[in]   RecipeIdInternalDataType    internalIdIn
[out]  RecipeIdInternalDataType    internalIdOut
[out]  Int32                       error);

 

Table 35 – UnprepareRecipe Method Arguments

externalId – Identification of the recipe used by the environment which is to be un-prepared.
internalIdIn – Internal identification of the recipe which is to be un-prepared. The client can use either the externalId or the internalIdIn, leaving the other empty. If both are given, the internalIdIn takes precedence.
internalIdOut – Internal identification of the recipe selected based on a given externalId or internalId. This is for verification by the client.
error – 0: OK; values > 0 are reserved for errors defined by this and future standards; values < 0 shall be used for application-specific errors.

Table 36 – UnprepareRecipe Method AddressSpace Definition

Attribute  | Value
BrowseName | UnprepareRecipe

References  | NodeClass | BrowseName      | DataType   | TypeDefinition | ModellingRule
HasProperty | Variable  | InputArguments  | Argument[] | PropertyType   | Mandatory
HasProperty | Variable  | OutputArguments | Argument[] | PropertyType   | Mandatory

 

If the vision system is in Ready state when UnprepareRecipe is called, and the recipe to be unprepared is the only recipe currently prepared, the vision system is expected to change into state Initialized after successful reversion of the preparation of the recipe.

If there are additional recipes in prepared state, the vision system is expected to remain in state Ready and be able to be called by a Start method with one of the remaining prepared recipes. Depending on the capabilities of the vision system, it may temporarily leave state Ready for state Initialized, then return to Ready.

If there is more than one recipe with the identical ExternalId – e.g. due to local copying and modifying of recipes on the vision system – it is implementation-defined how this ambiguity will be handled. However, it is expected that the vision system will handle the ambiguity in the same way as for method PrepareRecipe so that UnprepareRecipe is exactly reciprocal to PrepareRecipe.

7.5.2.4      GetRecipeListFiltered

This method is used to get a list of recipes matching certain filter criteria. It concerns itself only with the metadata of the recipes; the actual content is transferred by a RecipeTransferType object.

Signature

GetRecipeListFiltered (
[in]   RecipeIdExternalDataType    externalId
[in]   ProductIdDataType           productId
[in]   TriStateBooleanDataType     isPrepared
[in]   UInt32                      maxResults
[in]   UInt32                      startIndex
[in]   Int32                       timeout
[out]  Boolean                     isComplete
[out]  UInt32                      resultCount
[out]  Handle                      recipeHandle
[out]  RecipeIdInternalDataType[]  recipeList
[out]  Int32                       error);

 

Table 37 – GetRecipeListFiltered Method Arguments

Argument | Description
externalId | Identification of the recipe used by the environment, used as a filter.
productId | Identification of a product, used as a filter.
isPrepared | Filters for prepared recipes (value TRUE_1), non-prepared recipes (value FALSE_0) or without regard for the preparedness of recipes (value DONTCARE_2).
maxResults | Maximum number of recipes to return in one call; by passing 0, the client indicates that it does not put a limit on the number of recipes.
startIndex | Shall be 0 on the first call, multiples of maxResults on subsequent calls to retrieve portions of the entire list, if necessary.
timeout | With this argument the client can give a hint to the server how long it will need access to the recipe data. A value > 0 indicates an estimated maximum time for processing the data in milliseconds; a value = 0 indicates that the client will not need anything besides the data returned by the method call; a value < 0 indicates that the client cannot give an estimate. The client cannot rely on the data being available during the indicated time period; the argument is merely a hint allowing the server to optimize its resource management.
isComplete | Indicates whether there are more results in the entire list than retrieved according to startIndex and resultCount.
resultCount | Gives the number of valid results in recipeList.
recipeHandle | The server shall return to each client requesting recipe data a system-wide unique handle identifying the recipe set / client combination. This handle has to be used by the client to release the recipe set.
recipeList | List of InternalIds matching the filters.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 38 – GetRecipeListFiltered Method AddressSpace Definition

Attribute | Value
BrowseName | GetRecipeListFiltered

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

The input arguments are used as filters. Strings or TrimmedStrings as arguments or structure components of arguments can contain * and ? to be used as wildcards. Empty elements are considered to match everything, or in other words are not taken into account for filtering at all. The notion of emptiness is defined together with the respective DataTypes.
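As an illustration only (the specification does not mandate any particular implementation), a server might evaluate these string filters roughly as in the following sketch. Python's fnmatch module happens to implement * and ? wildcards, although it additionally supports [] character classes, which a strict implementation would have to escape.

from fnmatch import fnmatchcase

def matches_filter(value: str, filter_expr: str) -> bool:
    """Return True if value matches filter_expr.

    An empty filter element matches everything; * and ? act as wildcards.
    Case sensitivity is implementation-defined; fnmatchcase keeps the
    comparison case-sensitive here.
    """
    if not filter_expr:        # empty element: not taken into account
        return True
    return fnmatchcase(value, filter_expr)

# Illustrative use inside a GetRecipeListFiltered implementation:
# matching = [r for r in all_recipes
#             if matches_filter(r.external_id, external_id_filter)
#             and matches_filter(r.product_id, product_id_filter)]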

In recipeList, the method returns a list of all recipes whose ExternalIds or ProductIds match the filters.

For the recipeList output, there are the following cases with respect to the number of results (a client-side sketch follows this list):

–    The number of recipes to be returned according to the filter is less than or equal to maxResults; the first call, with startIndex=0, returns isComplete=TRUE, so the client knows that no further calls are necessary. resultCount gives the number of valid elements in the recipeList array.

–    The number of recipes to be returned is larger than maxResults; the first N calls (N > 0 with N ≤ floor((number of recipes) / maxResults)), with startIndex=(N-1)*maxResults, return isComplete=FALSE, so the client knows that further calls are necessary. The following call returns isComplete=TRUE, so the client knows that no further calls are necessary. resultCount gives the number of valid elements in the recipeList array (on each call, so on the first N calls, this should be maxResults).
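A client-side paging loop following these rules might look like the sketch below. The callables get_recipe_list_filtered and release_recipe_handle are hypothetical stand-ins for whatever mechanism the client uses to invoke the GetRecipeListFiltered and ReleaseRecipeHandle methods, and the default filter values are illustrative.

def fetch_all_recipes(get_recipe_list_filtered, release_recipe_handle,
                      external_id="", product_id="", is_prepared="DONTCARE_2",
                      max_results=50, timeout_ms=0):
    """Retrieve the complete filtered recipe list page by page."""
    recipes, handles = [], []
    start_index = 0
    try:
        while True:
            (is_complete, result_count, recipe_handle,
             recipe_list, error) = get_recipe_list_filtered(
                external_id, product_id, is_prepared,
                max_results, start_index, timeout_ms)
            if error != 0:
                raise RuntimeError(f"GetRecipeListFiltered failed: {error}")
            handles.append(recipe_handle)
            recipes.extend(recipe_list[:result_count])
            if is_complete:
                return recipes
            start_index += max_results
    finally:
        for handle in handles:     # maps to ReleaseRecipeHandle (Section 7.5.2.5)
            release_recipe_handle(handle)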

 

7.5.2.5      ReleaseRecipeHandle

This method is used to inform the server that the client has finished processing a given recipe set, allowing the server to free resources managing this recipe set.

The server should keep the data of the recipe set available for the client until the ReleaseRecipeHandle method is called or until a timeout given by the client has expired. However, the server is free to release the data at any time, depending on its internal resource management, so the client cannot rely on the data being available. ReleaseRecipeHandle is merely a hint allowing the server to optimize its internal resource management. For timeout usage see the description in Section 7.5.2.4.

Signature

ReleaseRecipeHandle (
[in]   Handle       recipeHandle
[out]  Int32        error);

 

Table 39 – ReleaseRecipeHandle Method Arguments

Argument | Description
recipeHandle | Handle returned by the GetRecipeById or GetRecipeListFiltered methods, identifying the recipe set/client combination.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 40 – ReleaseRecipeHandle Method AddressSpace Definition

Attribute | Value
BrowseName | ReleaseRecipeHandle

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

7.5.2.6      RemoveRecipe

This method is used to remove a recipe from the recipe management of the vision system.

Signature

RemoveRecipe (
[in]   RecipeIdExternalDataType   externalId
[out]  Int32                      error);

Table 41 – RemoveRecipe Method Arguments

Argument | Description
externalId | Identification of the recipe used by the environment.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 42 – RemoveRecipe Method AddressSpace Definition

Attribute | Value
BrowseName | RemoveRecipe

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

It is expected that the vision system removes the recipe matching the given ExternalId from its management structures. Whether the vision system also removes the actual recipe content is implementation-defined.

Application Note:

The removed recipe may still be referenced by stored results from the vision system. Therefore, it is strongly recommended that the InternalId of a removed recipe is not re-used by the vision system. Otherwise, traceability of results to recipes will be impaired and the vision system may not be able to fulfil certain external requirements, e.g. the FDA Part 11 requirements for pharmaceutical equipment.

However, this standard makes no requirements on the way the vision system creates and maintains its internal management data.

If there is more than one recipe with the identical ExternalId – e.g. due to local copying and modifying of recipes on the vision system – it is implementation-defined how this ambiguity will be handled. For example, the vision system may remove all these recipes or only the externally defined ones or any number of other possibilities.

If the server chooses to represent recipes in the Address Space, the server shall remove the recipe node in the same way as the vision system cleans up its management structures.

7.5.2.7      PrepareProduct

This method is used to prepare a product so that it can be used for starting a job on the vision system.

Signature

PrepareProduct (
[in]   ProductIdDataType           productId
[out]  RecipeIdInternalDataType    internalId
[out]  Int32                       error);

 

Table 43 – PrepareProduct Method Arguments

Argument | Description
productId | Identification of a product, which can be used in a Start method.
internalId | Internal identification of the recipe which is actually being used to work on the product.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 44 – PrepareProduct Method AddressSpace Definition

Attribute | Value
BrowseName | PrepareProduct

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

In effect, the vision system will use a recipe to work on the product. Therefore, the vision system is expected to select a recipe linked to the given ProductId. If more than one recipe is linked to the product, the resolution of this ambiguity is implementation-defined.

The vision system shall return the internal identification of the recipe selected. If there is more than one recipe linked to the given ProductId, it is implementation-defined how this ambiguity will be handled. It is expected that the resolution of the ambiguity will be implemented in a systematic manner throughout the vision system.

Since preparing a product is in effect the same as preparing a recipe which has been selected by the vision system on the basis of the ProductId, state handling is identical to the PrepareRecipe method.
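Purely as an illustration of a systematic, implementation-defined resolution, a vision system might always select the most recently modified recipe linked to the product. The sketch below assumes hypothetical recipe records with linked_products and last_modified fields; it is not mandated by this specification.

def select_recipe_for_product(all_recipes, product_id):
    """Illustrative server-side policy: the newest linked recipe wins."""
    candidates = [r for r in all_recipes if product_id in r.linked_products]
    if not candidates:
        return None            # no linked recipe: PrepareProduct has to fail
    # Resolve the ambiguity deterministically, here by LastModified.
    return max(candidates, key=lambda r: r.last_modified)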

7.5.2.8      UnprepareProduct

This method is used to revert the preparation of a product so that it can no longer be used for starting a job on the vision system.

Signature

UnprepareProduct (
[in]   ProductIdDataType          productId
[out]  RecipeIdInternalDataType   internalId
[out]  Int32                      error);

 

Table 45 – UnprepareProduct Method Arguments

Argument | Description
productId | Identification of a product, which is to be unprepared.
internalId | Internal identification of the recipe which is actually unprepared.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 46 – UnprepareProduct Method AddressSpace Definition

Attribute | Value
BrowseName | UnprepareProduct

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

It is expected that the vision system will select a recipe based on the ProductId in the same way as in method PrepareProduct and then unprepare that recipe so that UnprepareProduct is exactly reciprocal to PrepareProduct.

Therefore, state handling is identical to the UnprepareRecipe method.

7.5.2.9      UnlinkProduct

This method is used to remove the link between a recipe and a product in the vision system.

Signature

UnlinkProduct (
[in]   RecipeIdInternalDataType  internalId
[in]   ProductIdDataType         productId
[out]  Int32                     error);

 

Table 47 – UnlinkProduct Method Arguments

Argument | Description
internalId | Identification of the recipe used by the system.
productId | Identification of a product the recipe is to be used for.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 48 – UnlinkProduct Method AddressSpace Definition

Attribute | Value
BrowseName | UnlinkProduct

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

It is expected that the vision system removes a link between recipes with the given InternalId and products with the given ProductId from its internal management structures.

UnlinkProduct uses the InternalId to ensure that it is unambiguous which recipe the link is removed from. If need be, the client can get the InternalIds for given ExternalIds using the GetRecipeListFiltered method (7.5.2.4).
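A minimal client-side sketch of that lookup, using the same kind of hypothetical method wrappers as in the earlier sketches (handle release via ReleaseRecipeHandle is omitted for brevity):

def unlink_by_external_id(get_recipe_list_filtered, unlink_product,
                          external_id, product_id):
    """Resolve the InternalId(s) for an ExternalId, then remove the product link."""
    (is_complete, result_count, recipe_handle,
     recipe_list, error) = get_recipe_list_filtered(
        external_id, "", "DONTCARE_2", 0, 0, 0)   # maxResults=0: no limit
    if error != 0:
        raise RuntimeError(f"GetRecipeListFiltered failed: {error}")
    for internal_id in recipe_list[:result_count]:
        err = unlink_product(internal_id, product_id)   # maps to UnlinkProduct
        if err != 0:
            raise RuntimeError(f"UnlinkProduct failed: {err}")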

Starting jobs based on this ProductId will no longer lead to this recipe being used. If there is no link left between this ProductId and any recipe, it will no longer be possible to start a job based on that ProductId.

If the server chooses to represent recipes in the Address Space, the server shall remove the given ProductId from the appropriate recipe node.

If the server chooses to represent products in the Address Space, and there are no recipes linked to a product anymore, it is expected that the server removes the corresponding product node.

7.6      RecipeTransferType

7.6.1      Overview

This ObjectType is a subtype of TemporaryFileTransferType defined in OPC 10000-5 and is used for transferring a recipe.

The RecipeTransferType overrides the Methods GenerateFileForRead and GenerateFileForWrite to specify the concrete type of the generateOptions Parameter of the Methods.

Figure 13 shows the hierarchical structure and details of the composition. It is formally defined in Table 49.

 

Figure 13 – RecipeTransferType

 

Table 49 – Definition of RecipeTransferType

Attribute | Value
BrowseName | RecipeTransferType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the TemporaryFileTransferType defined in OPC 10000-5
HasComponent | Method | 0:GenerateFileForRead | -- | -- | Mandatory
HasComponent | Method | 0:GenerateFileForWrite | -- | -- | Mandatory

 

7.6.2      RecipeTransferType Methods

7.6.2.1      GenerateFileForRead

This method is used to start the read file transaction. A successful call of this method creates a temporary FileType Object with the file content and returns the NodeId of this Object and the file handle to access the Object.

Signature

GenerateFileForRead (
[in]   RecipeTransferOptions   generateOptions
[out]  NodeId                  fileNodeId
[out]  UInt32                  fileHandle
[out]  NodeId                  completionStateMachine);

 

Table 50 – GenerateFileForRead Method Arguments

Argument | Description
generateOptions | The structure used to define the generate options for the file, described in Section 12.11.
fileNodeId | NodeId of the temporary file.
fileHandle | The fileHandle of the opened TransferFile. The fileHandle can be used to access the TransferFile methods Read and Close.
completionStateMachine | If the creation of the file is completed asynchronously, the parameter returns the NodeId of the corresponding FileTransferStateMachineType Object. If the creation of the file is already completed, the parameter is null. If a FileTransferStateMachineType object NodeId is returned, the Read Method of the file fails until the TransferState has changed to ReadTransfer.

 

Table 51 – GenerateFileForRead Method AddressSpace Definition

Attribute | Value
BrowseName | GenerateFileForRead

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory
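Putting the read transaction together, a client might proceed as in the following sketch. The call and wait_for_read_transfer helpers are hypothetical stand-ins for the client's OPC UA stack, and the chunk size is illustrative; the structure of RecipeTransferOptions is defined in Section 12.11.

CHUNK = 64 * 1024   # illustrative read size in bytes

def read_recipe_file(call, recipe_transfer_node, generate_options,
                     wait_for_read_transfer):
    """Read the complete recipe content via the temporary file transfer.

    call(node, method_name, *args) and wait_for_read_transfer(state_machine)
    are hypothetical helpers standing in for the client's OPC UA stack.
    """
    file_node, file_handle, completion_sm = call(
        recipe_transfer_node, "GenerateFileForRead", generate_options)

    if completion_sm is not None:
        # File creation is asynchronous: Read fails until the
        # FileTransferStateMachineType has reached ReadTransfer.
        wait_for_read_transfer(completion_sm)

    data = bytearray()
    while True:
        chunk = call(file_node, "Read", file_handle, CHUNK)
        if not chunk:
            break
        data.extend(chunk)
    call(file_node, "Close", file_handle)
    return bytes(data)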

 

7.6.2.2      GenerateFileForWrite

This method is used to start the write file transaction. A successful call of this method creates a temporary FileType Object with the file content and returns the NodeId of this Object and the file handle to access the Object.

Signature

GenerateFileForWrite (
[in]   RecipeTransferOptions   generateOptions
[out]  NodeId                  fileNodeId
[out]  UInt32                  fileHandle);

 

Table 52 – GenerateFileForWrite Method Arguments

Argument | Description
generateOptions | The structure used to define the generate options for the file, described in Section 12.11.
fileNodeId | NodeId of the temporary file.
fileHandle | The fileHandle of the opened TransferFile. The fileHandle can be used to access the TransferFile methods Write and CloseAndCommit.

 

Table 53 – GenerateFileForWrite Method AddressSpace Definition

Attribute | Value
BrowseName | GenerateFileForWrite

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

7.7      RecipeType

7.7.1      Overview

This ObjectType defines the metadata for a recipe and methods for handling individual recipes.

 

Figure 14 – Overview RecipeType

 

Table 54 – Definition of RecipeType

Attribute | Value
BrowseName | RecipeType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the BaseObjectType defined in OPC 10000-5
HasProperty | Variable | ExternalId | RecipeIdExternalDataType | PropertyType | Optional
HasProperty | Variable | InternalId | RecipeIdInternalDataType | PropertyType | Mandatory
HasProperty | Variable | IsPrepared | Boolean | PropertyType | Mandatory
HasProperty | Variable | LastModified | UtcTime | PropertyType | Mandatory
HasProperty | Variable | LinkedProducts | ProductIdDataType[] | PropertyType | Optional
HasComponent | Object | Handle | -- | FileType | Optional
HasComponent | Method | LinkProduct | -- | -- | Optional
HasComponent | Method | UnlinkProduct | -- | -- | Optional
HasComponent | Method | Prepare | -- | -- | Mandatory
HasComponent | Method | Unprepare | -- | -- | Mandatory

 

ExternalId

RecipeId for identifying the recipe outside the vision system. The ExternalId is only managed by the environment.

InternalId

System-wide unique ID for identifying a recipe. This ID is assigned by the vision system.

LastModified

The time, when this recipe was last modified in the recipe store of the vision system. It is assumed that this value is consistent between recipes on the system so that it can be used to order recipes on the system by modification time. As it is possible that the vision system may not be synchronized with a time server, this value may not be valid for comparisons between systems.

LinkedProducts

Array of ProductIds which this recipe is linked to. May be empty.

Handle

FileType object for handling transfer of recipe data between client and server. The data is treated as a binary blob by the server. This object is optional for systems not supporting transfer of the actual recipe contents.

7.7.2      RecipeType Methods

7.7.2.1      Overview

If recipes are exposed in the Address Space, the corresponding entries in the Recipes folder of the RecipeManagement object have to be created using the AddRecipe method of the RecipeManagement object. The recipe object cannot destroy itself as this would affect the data structures of the RecipeManagement object; therefore, removal has to take place using the RemoveRecipe method of that object.

Operations other than AddRecipe can be carried out directly on the RecipeType object as well as on the RecipeManagement object.

For data transfer, the FileType object contained in the RecipeType object can be used directly. Therefore, there is no need for a specific Get method.

7.7.2.2      LinkProduct

This method is used to create a link between the recipe and a product in the vision system.

Signature

LinkProduct (
[in]   ProductIdDataType  productId
[out]  Int32              error);

 

Table 55 – LinkProduct Method Arguments

Argument | Description
productId | Identification of a product the recipe is to be used for.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

 

Table 56 – LinkProduct Method AddressSpace Definition

Attribute | Value
BrowseName | LinkProduct

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

It is assumed that the given ProductId already exists in the vision system management structures, e.g. having been created with the AddProduct method of the RecipeManagementType. It is recommended that it also exists in the Products folder of the RecipeManagementType object to expose a consistent set of data in the Address Space.

In the case of a successful link, the server shall add the given ProductId to the LinkedProducts list of this RecipeType object.

The method shall fail if the product does not exist in the vision system management structures.

7.7.2.3      UnlinkProduct

This method is used to remove the link between the recipe and a product in the vision system.

Signature

UnlinkProduct (
[in]   ProductIdDataType  productId
[out]  Int32              error);

 

 

Table 57 – UnlinkProduct Method Arguments

Argument | Description
productId | Identification of a product the recipe is to be used for.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

 

Table 58 – UnlinkProduct Method AddressSpace Definition

Attribute | Value
BrowseName | UnlinkProduct

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

The server shall remove the given ProductId from the LinkedProducts list of this RecipeType object.

It is expected that the vision system removes a link between the recipe represented by this object and products with the given ProductId from its internal management structures.

Starting jobs based on this ProductId will no longer lead to this recipe being used. If there is no link left between this ProductId and any recipe, it will no longer be possible to start a job based on that ProductId.

7.7.2.4      Prepare

This method is used to prepare the recipe so that it can be used for starting a job on the vision system.

Signature

Prepare (
[out]  Boolean    isCompleted
[out]  Int32      error);

Table 59 – Prepare Method Arguments

Argument | Description
isCompleted | Flag to indicate that the recipe has been completely prepared before the method returned. If False, the client needs either to check the properties of the recipe to determine when preparation has completed or wait for the RecipePrepared event.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 60 – Prepare Method AddressSpace Definition

Attribute | Value
BrowseName | Prepare

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

The effects of the Prepare method of a RecipeType object on the VisionSystem shall be identical to those of the PrepareRecipe method of the RecipeManagementType object.

7.7.2.5      Unprepare

This method is used to revert the preparation of the recipe so that it can no longer be used for starting a job on the vision system.

Signature

Unprepare (
[out]  Int32   error);

Table 61 – Unprepare Method Arguments

Argument | Description
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 62 – Unprepare Method AddressSpace Definition

Attribute | Value
BrowseName | Unprepare

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

The effects of the Unprepare method of a RecipeType object on the VisionSystemAutomaticModeStateMachine shall be identical to those of the UnprepareRecipe method of the RecipeManagementType object.

7.7.2.6      Recipe transfer

There are no dedicated transfer methods on the RecipeType because it already contains a FileType object representing the content of the actual recipe in the vision system. Thus, transfer can be carried out using the standard Open, Read, Write, Close methods of the FileType object.
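As an illustration of the write direction, replacing the recipe content through that FileType object could look like the following sketch. The call helper is again a hypothetical stand-in for the client's OPC UA stack; the mode bits follow the FileType Open definition in OPC 10000-5 (bit 1 write, bit 2 erase existing), and error handling is omitted for brevity.

WRITE_MODE = 0x2 | 0x4   # FileType Open mode bits: Write + EraseExisting
CHUNK = 64 * 1024        # illustrative chunk size in bytes

def write_recipe_content(call, handle_file_node, blob):
    """Write a recipe blob into the Handle FileType object of a RecipeType.

    call(node, method_name, *args) is a hypothetical helper for invoking
    OPC UA methods.
    """
    file_handle = call(handle_file_node, "Open", WRITE_MODE)
    try:
        for offset in range(0, len(blob), CHUNK):
            call(handle_file_node, "Write", file_handle, blob[offset:offset + CHUNK])
    finally:
        call(handle_file_node, "Close", file_handle)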

7.8      RecipeFolderType

This ObjectType is a subtype of the FolderType and is used to organize the recipes of a vision system. Figure 15 shows the hierarchical structure and details of the composition. It is formally defined in Table 63.

Instances of this ObjectType organize all available recipes of the vision system, which the server decides to expose in the Address Space. It may contain no recipe if no recipe is available, if the server does not expose recipes in the Address Space at all, or if no recipe matches the criteria of the server for exposure in the Address Space.

Note that the folder contains only metadata of the recipes, not the actual configuration data of the vision system.

 

Figure 15 – Overview RecipeFolderType

 

Table 63 – Definition of RecipeFolderType

Attribute | Value
BrowseName | RecipeFolderType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the FolderType defined in OPC 10000-5
HasComponent | Object | <Recipe> | -- | RecipeType | OptionalPlaceholder

 

The RecipeType used in the RecipeFolderType is defined in Section 7.7.

 

7.9      ProductFolderType

This ObjectType is a subtype of the FolderType and is used to organize the products of a vision system. Figure 16 shows the hierarchical structure and details of the composition. It is formally defined in Table 64.

Instances of this ObjectType organize all available products of the vision system, which the server decides to expose in the Address Space. It may contain no product if no product is available, if the server does not expose products in the Address Space at all, or if no product matches the criteria of the server for exposure in the Address Space.

Note that the folder contains only metadata of the products, not the actual product data of the vision system.

 

Figure 16 – Overview ProductFolderType

 

Table 64 – Definition of ProductFolderType

Attribute | Value
BrowseName | ProductFolderType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the FolderType defined in OPC 10000-5
HasComponent | Variable | <Product> | ProductDataType | BaseDataVariableType | OptionalPlaceholder

 

The ProductDataType used in the ProductFolderType is defined in Section 12.15.

 

7.10    ResultManagementType

7.10.1    Overview

This ObjectType defines the representation of the machine vision system result management. Figure 17 shows the hierarchical structure and details of the composition. It is formally defined in Table 65.

ResultManagementType provides methods to query the results generated by the underlying vision system. Results can be stored in a local result store. An event of ResultReadyEventType, which is defined in Section 8.3.8.4, shall be triggered when the system generates a new result.

 

Figure 17 – Overview ResultManagementType

 

Table 65 – Definition of ResultManagementType

Attribute | Value
BrowseName | ResultManagementType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the BaseObjectType defined in OPC 10000-5
HasComponent | Method | GetResultById | -- | -- | Mandatory
HasComponent | Method | GetResultComponentsById | -- | -- | Mandatory
HasComponent | Method | GetResultListFiltered | -- | -- | Mandatory
HasComponent | Method | ReleaseResultHandle | -- | -- | Optional
HasComponent | Object | ResultTransfer | -- | ResultTransferType | Optional
HasComponent | Object | Results | -- | ResultFolderType | Optional

 

ResultTransfer is an instance of the ResultTransferType defined in Section 7.12 and it is used to transfer the contents of a result by the temporary file transfer method defined in OPC 10000-5, Annex C.4.

Results is an Object of the ResultFolderType that organizes variables of the DataType ResultDataType, which is defined in Section 12.17. If the server chooses to expose result information in the Address Space, it may contain the set of all results available on the system or a filtered subset, e.g. the set of all currently finished results. This is implementation-defined. If a server does not expose result information in the Address Space, this object is expected to be non-existent.

7.10.2    ResultManagementType methods

7.10.2.1    GetResultById

This method is used to retrieve a result from the vision system. Depending on the design of the vision system, the client may be informed by events of ResultReadyEventType that a new result is available. Then, the client might fetch this result using the information provided by events of ResultReadyEventType which is defined in Section 8.3.8.4.

Since the resultId is supposed to be system-wide unique, this method shall return only a single result. Since there may be additional result content to be retrieved by temporary file transfer, the server should keep the result data available, resources permitting, until the client releases the handle by calling ReleaseResultHandle. However, the client cannot rely on the data to remain available until then.

Signature

GetResultById (
[in]   ResultIdDataType   resultId
[in]   Int32              timeout
[out]  Handle             resultHandle
[out]  ResultDataType     result
[out]  Int32              error);

 

Table 66 – GetResultById Method Arguments

Argument | Description
resultId | System-wide unique identifier for the result.
timeout | With this argument the client can give a hint to the server how long it will need access to the result data. A value > 0 indicates an estimated maximum time for processing the data in milliseconds; a value = 0 indicates that the client will not need anything besides the data returned by the method call; a value < 0 indicates that the client cannot give an estimate. The client cannot rely on the data being available during the indicated time period; the argument is merely a hint allowing the server to optimize its resource management.
resultHandle | The server shall return to each client requesting result data a system-wide unique handle identifying the result set / client combination. This handle should be used by the client to indicate to the server that the result data is no longer needed, allowing the server to optimize its resource handling.
result | The result including metadata.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 67 – GetResultById Method AddressSpace Definition

Attribute | Value
BrowseName | GetResultById

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory
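The event-driven retrieval described above might be sketched as follows; the event object, the method wrappers and the field name result_id are assumptions standing in for the client's OPC UA stack and its ResultReadyEventType subscription.

def on_result_ready(event, get_result_by_id, release_result_handle,
                    process_result, timeout_ms=0):
    """React to a ResultReadyEventType notification by fetching the result.

    event.result_id stands for the result identification carried by the event;
    the callables are hypothetical wrappers around the ResultManagement
    methods of this specification.
    """
    result_handle, result, error = get_result_by_id(event.result_id, timeout_ms)
    if error != 0:
        raise RuntimeError(f"GetResultById failed with error {error}")
    try:
        process_result(result)                  # application-specific handling
    finally:
        release_result_handle(result_handle)    # maps to ReleaseResultHandle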

 

7.10.2.2    GetResultComponentsById

This method is used to retrieve a result from the vision system. It is basically identical to the GetResultById method described in Section 7.10.2.1, but it returns the result not in a single output argument of ResultDataType but in individual output arguments corresponding to the elements of the ResultDataType structure.

The reason for providing this method is to facilitate the use of subtypes to the structures nested inside of ResultDataType. Since the NodeIds of structured DataTypes nested within a structured DataType are not transferred together with the DataType, subtyping these nested structures would then also necessitate subtyping ResultDataType. This is of course possible, but in the absence of such a subtype, the individual components can still be requested by this method.

Signature

GetResultComponentsById (
[in]   ResultIdDataType           resultId
[in]   Int32                      timeout
[out]  Boolean                    hasTransferableDataOnFile
[out]  Handle                     resultHandle
[out]  Boolean                    isPartial
[out]  Boolean                    isSimulated
[out]  ResultStateDataType        resultState
[out]  MeasIdDataType             measId
[out]  PartIdDataType             partId
[out]  RecipeIdExternalDataType   externalRecipeId
[out]  RecipeIdInternalDataType   internalRecipeId
[out]  ProductIdDataType          productId
[out]  ConfigurationIdDataType    externalConfigurationId
[out]  ConfigurationIdDataType    internalConfigurationId
[out]  JobIdDataType              jobId
[out]  UtcTime                    creationTime
[out]  ProcessingTimesDataType    processingTimes
[out]  BaseDataType[]             resultContent
[out]  Int32                      error);

 

Table 68 – GetResultComponentsById Method Arguments

Argument | Description
resultId | System-wide unique identifier for the result.
timeout | With this argument the client can give a hint to the server how long it will need access to the result data. A value > 0 indicates an estimated maximum time for processing the data in milliseconds; a value = 0 indicates that the client will not need anything besides the data returned by the method call; a value < 0 indicates that the client cannot give an estimate. The client cannot rely on the data being available during the indicated time period; the argument is merely a hint allowing the server to optimize its resource management.
hasTransferableDataOnFile | Indicates that TemporaryFileTransfer needs to be used to retrieve all data of the result content.
resultHandle | The server shall return to each client requesting result data a system-wide unique handle identifying the result set / client combination. This handle should be used by the client to indicate to the server that the result data is no longer needed, allowing the server to optimize its resource handling.
isPartial | Indicates whether the result is the partial result of a total result.
isSimulated | Indicates whether the system was in simulation mode when the job generating this result was created.
resultState | ResultState provides information about the current state of a result; the ResultStateDataType is defined in Section 12.19.
measId | This identifier is given by the client when starting a single or continuous execution and transmitted to the vision system. It is used to identify the respective result data generated for this job. Although the system-wide unique JobId would be sufficient to identify the job which the result belongs to, this makes for easier filtering on the part of the client without keeping track of JobIds.
partId | A PartId is given by the client when starting the job; although the system-wide unique JobId would be sufficient to identify the job which the result belongs to, this makes for easier filtering on the part of the client without keeping track of JobIds.
externalRecipeId | External identifier of the recipe in use which produced the result. This is only managed by the environment.
internalRecipeId | Internal identifier of the recipe in use which produced the result. This identifier is system-wide unique and is assigned by the vision system.
productId | Identifier of the product in use which produced the result. This is only managed by the environment.
externalConfigurationId | External identifier of the configuration in use while the result was produced.
internalConfigurationId | Internal identifier of the configuration in use while the result was produced. This identifier is system-wide unique and is assigned by the vision system.
jobId | The identifier of the job, created by the transition from state Ready to state SingleExecution or to state ContinuousExecution, which produced the result.
creationTime | CreationTime indicates the time when the result was created.
processingTimes | Collection of different processing times that were needed to create the result.
resultContent | Abstract data type to be subtyped from to hold the actual result content created by the job.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 69 – GetResultComponentsById Method AddressSpace Definition

Attribute | Value
BrowseName | GetResultComponentsById

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

7.10.2.3    GetResultListFiltered

This method is used to get a list of results matching certain filter criteria.

Signature

GetResultListFiltered (
[in]   ResultStateDataType        resultState
[in]   MeasIdDataType             measId
[in]   PartIdDataType             partId
[in]   RecipeIdExternalDataType   externalRecipeId
[in]   RecipeIdInternalDataType   internalRecipeId
[in]   ConfigurationIdDataType    externalConfigurationId
[in]   ConfigurationIdDataType    internalConfigurationId
[in]   ProductIdDataType          productId
[in]   JobIdDataType              jobId
[in]   UInt32                     maxResults
[in]   UInt32                     startIndex
[in]   Int32                      timeout
[out]  Boolean                    isComplete
[out]  UInt32                     resultCount
[out]  Handle                     resultHandle
[out]  ResultDataType[]           resultList
[out]  Int32                      error);

 

Table 70 – GetResultListFiltered Method Arguments

Argument | Description
resultState | If not 0, only results having the specified state are returned.
measId | If not empty, only results corresponding to the given measId are returned.
partId | If not empty, only results corresponding to the given partId are returned.
externalRecipeId | If not empty, only results corresponding to the given externalRecipeId are returned.
internalRecipeId | If not empty, only results corresponding to the given internalRecipeId are returned.
externalConfigurationId | If not empty, only results corresponding to the given externalConfigurationId are returned.
internalConfigurationId | If not empty, only results corresponding to the given internalConfigurationId are returned.
productId | If not empty, only results corresponding to the given productId are returned.
jobId | If not empty, only results corresponding to the given jobId are returned.
maxResults | Maximum number of results to return in one call; by passing 0, the client indicates that it does not put a limit on the number of results.
startIndex | Shall be 0 on the first call, multiples of maxResults on subsequent calls to retrieve portions of the entire list, if necessary.
timeout | With this argument the client can give a hint to the server how long it will need access to the result data. A value > 0 indicates an estimated maximum time for processing the data in milliseconds; a value = 0 indicates that the client will not need anything besides the data returned by the method call; a value < 0 indicates that the client cannot give an estimate. The client cannot rely on the data being available during the indicated time period; the argument is merely a hint allowing the server to optimize its resource management.
isComplete | Indicates whether there are more results in the entire list than retrieved according to startIndex and resultCount.
resultCount | Gives the number of valid results in resultList.
resultHandle | The server shall return to each client requesting result data a system-wide unique handle identifying the result set / client combination. This handle has to be used by the client to release the result set.
resultList | List of results matching the filters.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 71 – GetResultListFiltered Method AddressSpace Definition

Attribute | Value
BrowseName | GetResultListFiltered

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

There are the following cases with respect to the number of results:

–    The number of results to be returned according to the filter is less than or equal to maxResults; the first call, with startIndex=0, returns isComplete=TRUE, so the client knows that no further calls are necessary. resultCount gives the number of valid elements in the resultList array.

–    The number of results to be returned is larger than maxResults; the first N calls (N > 0 with N ≤ floor((number of results) / maxResults)), with startIndex=(N-1)*maxResults, return isComplete=FALSE, so the client knows that further calls are necessary. The following call returns isComplete=TRUE, so the client knows that no further calls are necessary. resultCount gives the number of valid elements in the resultList array (on each call, so on the first N calls, this should be maxResults).

 

7.10.2.4    ReleaseResultHandle

This method is used to inform the server that the client has finished processing a given result set, allowing the server to free resources managing this result set.

The server should keep the data of the result set available for the client until the ReleaseResultHandle method is called or until a timeout given by the client has expired. However, the server is free to release the data at any time, depending on its internal resource management, so the client cannot rely on the data being available. ReleaseResultHandle is merely a hint allowing the server to optimize its internal resource management. For timeout usage see the description in Section 7.10.2.1.

 

Signature

ReleaseResultHandle (
[in]   Handle      resultHandle
[out]  Int32       error);

 

Table 72 – ReleaseResultHandle Method Arguments

Argument | Description
resultHandle | Handle returned by GetResultById, GetResultComponentsById or GetResultListFiltered, identifying the result set/client combination.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 73 – ReleaseResultHandle Method AddressSpace Definition

Attribute | Value
BrowseName | ReleaseResultHandle

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

7.11    ResultFolderType

This ObjectType is a subtype of FolderType and is used to organize available results of the vision system which the server decides to expose in the Address Space. It may contain no result if no result is available, if the server does not expose results in the Address Space at all, or if no available result matches the criteria of the server for exposure in the Address Space. Figure 18 shows the hierarchical structure and details of the composition. It is formally defined in Table 74.

The ResultFolderType contains all results of the vision system, which are available and should be exposed in the Address Space. It may contain no result if no result is available or multiple if multiple results are available.

Figure 18 – Overview ResultFolderType

 

 

Table 74 – Definition of ResultFolderType

Attribute | Value
BrowseName | ResultFolderType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the FolderType defined in OPC 10000-5
HasComponent | Variable | <ResultVariable> | ResultDataType | ResultType | OptionalPlaceholder

 

The ResultType used in the ResultFolderType is defined in Section 9.1.

 

7.12    ResultTransferType

7.12.1    Overview

This ObjectType is a subtype of the TemporaryFileTransferType defined in OPC 10000-5 and is used to transfer result data as a file.

The ResultTransferType overrides the Method GenerateFileForRead to specify the concrete type of the generateOptions Parameter of the Method. It does not specialize the GenerateFileForWrite method of the base type, as results are supposed to be only generated by the vision system, not received.

Figure 19 shows the hierarchical structure and details of the composition. It is formally defined in Table 75.

 

Figure 19 – Overview ResultTransferType

 

Table 75 – Definition of ResultTransferType

Attribute | Value
BrowseName | ResultTransferType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the TemporaryFileTransferType defined in OPC 10000-5
HasComponent | Method | 0:GenerateFileForRead | -- | -- | Mandatory

 

7.12.2    ResultTransferType methods

7.12.2.1    GenerateFileForRead

This method is used to start the read file transaction. A successful call of this method creates a temporary FileType Object with the file content and returns the NodeId of this Object and the file handle to access the Object.

Signature

GenerateFileForRead (
[in]   ResultTransferOptions    generateOptions
[out]  NodeId                   fileNodeId
[out]  UInt32                   fileHandle
[out]  NodeId                   completionStateMachine);

 

Table 76 – GenerateFileForRead Method Arguments

Argument | Description
generateOptions | The structure used to define the generate options for the file.
fileNodeId | NodeId of the temporary file.
fileHandle | The fileHandle of the opened TransferFile. The fileHandle can be used to access the TransferFile methods Read and Close.
completionStateMachine | If the creation of the file is completed asynchronously, the parameter returns the NodeId of the corresponding FileTransferStateMachineType Object. If the creation of the file is already completed, the parameter is null. If a FileTransferStateMachineType object NodeId is returned, the Read Method of the file fails until the TransferState has changed to ReadTransfer.

 

Table 77 – GenerateFileForRead Method AddressSpace Definition

Attribute | Value
BrowseName | GenerateFileForRead

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory

 

 

7.13    SafetyStateManagementType

7.13.1    Overview

This ObjectType provides a method to inform the vision system about changes of an external safety state. The vision system itself gives feedback about the action taken to react to this state change. Figure 20 shows the hierarchical structure and details of the composition. It is formally defined in Table 78.

 

Figure 20 – Overview SafetyStateManagementType

 

Table 78 – Definition of SafetyStateManagementType

Attribute | Value
BrowseName | SafetyStateManagementType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the BaseObjectType defined in OPC 10000-5
HasComponent | Method | ReportSafetyState | -- | -- | Mandatory
HasComponent | Variable | VisionSafetyTriggered | Boolean | BaseDataVariableType | Mandatory
HasComponent | Variable | VisionSafetyInformation | String | BaseDataVariableType | Mandatory

 

VisionSafetyTriggered

Information about the current internal safety state.

VisionSafetyInformation

Textual information that can be provided by the vision system – e.g. “open safety door”.

 

7.13.2    SafetyStateManagementType methods

7.13.2.1    ReportSafetyState

This method is used to provide information about the change of an external safety state. For example, safety doors which are not under the supervision of the vision system may be open, and as a consequence it is not possible to switch on a laser source inside the vision system.

Important note: This is not to be used as a safety feature. It is only for information purposes.

Signature

ReportSafetyState (
[in]   Boolean     safetyTriggered
[in]   String      safetyInformation
[out]  Int32       error);

 

Table 79 – ReportSafetyState Method Arguments

Argument | Description
safetyTriggered | Information about the current external safety state.
safetyInformation | Information that can be provided to the vision system – e.g. opening of a safety door.
error | 0 – OK. Values > 0 are reserved for errors defined by this and future standards. Values < 0 shall be used for application-specific errors.

 

Table 80 – ReportSafetyState Method AddressSpace Definition

Attribute | Value
BrowseName | ReportSafetyState

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | InputArguments | Argument[] | PropertyType | Mandatory
HasProperty | Variable | OutputArguments | Argument[] | PropertyType | Mandatory
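For example, a line controller might forward a door-open condition as in the sketch below; the call helper and the node handle are hypothetical, and, as stated above, the information is not safety-relevant.

def report_door_open(call, safety_state_management_node):
    """Inform the vision system that an external safety function has triggered.

    call(node, method_name, *args) is a hypothetical helper for invoking
    OPC UA methods; the returned error code follows the convention of this
    specification (0 = OK).
    """
    error = call(safety_state_management_node, "ReportSafetyState",
                 True, "safety door open")
    if error != 0:
        raise RuntimeError(f"ReportSafetyState failed with error {error}")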

 


 

8       ObjectTypes for Vision System State Handling

8.1      State Machine overview

8.1.1      Introduction

All state machine types defined in this specification are mandatory unless explicitly stated otherwise. However, some states may be implemented as transient (do-nothing) states depending on the characteristics of the vision system.

To improve clarity and re-usability, this specification makes use of hierarchical state machines. This means that states of a state machine may have underlying SubStateMachines. The instantiation of a SubStateMachine for a particular state of the main state machine may be specified as optional.

8.1.2      Hierarchical state machines

8.1.2.1      Entering a SubStateMachine

OPC 10000-5, Annex B.4.9 defines several ways of entering a SubStateMachine:

1.     If the SubStateMachine has an initial state (i.e., a state of type InitialStateType) this state is entered whenever the parent state is entered.
We make use of this principle for the VisionStepModelStateMachine defined in Section 8.4.

2.     A SubStateMachine can also be entered by a direct transition from the parent state machine into one of its states, bypassing the initial state. In this case, the parent state machine goes automatically into the parent state of the SubStateMachine.
We make use of this principle for operation mode state machines like the VisionAutomaticModeStateMachine defined in Section 8.3.

3.     If a SubStateMachine has no initial state and the parent state is entered directly, the state of the SubStateMachine is server-specific.
We make use of this principle for the error handling described in Section 8.2.2.4.

8.1.2.2      Leaving a SubStateMachine

The SubStateMachine types used here do not have transitions into specific states of the parent state machine so that they are not bound to a specific state machine, but can be used within states of any state machine. Therefore, the SubStateMachines are not left explicitly. Instead, the parent state machine may leave the parent state of the SubStateMachine in which case the SubStateMachine ceases to be active and will enter a Bad_StateNotActive state. In that case, the system actually transitions from a state of the SubStateMachine into an unrelated state of the main state machine, but this transition will not be explicitly shown or specified on the level of the SubStateMachine.

We make use of that principle especially for the error handling described in Section 8.2.2.4.

At present, this specification describes one such mode, the “Automatic” mode. All pertaining states are contained in a SubStateMachine of type VisionAutomaticModeStateMachineType defined in Section 8.3. The reason for this naming is that this state machine is derived from – but not restricted to – the typical application of a vision system in automatic operation on a production line.

8.1.2.3      State machine type hierarchy

The following diagram shows the hierarchy of mandatory state machine types in this specification. All state machine types are derived from the FiniteStateMachineType, implying that all their states and transitions are pre-defined and cannot be changed or added to by sub-typing, so that a client can detect all states and transitions and rely on these, and only these, states and transitions to exist on sub-types of the state machine type.

Figure 21 – Vision system state machine type hierarchy

 

8.1.3      Automatic and triggered transitions and events

In the state machines specified here, most transitions can be caused by method calls and all transitions can be caused by internal decisions of the vision system. We call the latter “automatic” transitions.

In the state diagrams describing the state machines in the following sections, all transitions are shown individually. Transitions caused by methods are shown in black with the method name as the UML transition trigger. “Automatic” transitions are shown in orange without a trigger.

 

Upon entry into a new state, a StateChanged Event will be triggered indicating the transition.

Some transitions may trigger extra events. These events are shown in the state diagrams on the transition as a UML effect, preceded by a “/”.

Some transitions may depend on conditions. Where they are semantically important, they have been put into the state diagrams as UML guards in “[]”.

8.1.4      Preventing transitions

The server can prevent transitions from being carried out, e.g. due to the internal state of the vision system. A typical example would be to prevent leaving the parent state of a VisionStepModelStateMachine in order to avoid interrupting synchronization with an external system.

As automatic transitions are always done for internal reasons, the server will obviously simply not execute the transition in this case.

For method-triggered transitions, the server should set the Executable flag of the method in question to False to signal to clients that this method should currently not be called. If the method is called anyway, regardless of the Executable flag, the method shall fail with an appropriate error code.
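In client code, honouring the Executable flag might look like the following sketch; read_executable and call are hypothetical stand-ins for the client's OPC UA stack.

def call_transition_method(read_executable, call, state_machine_node,
                           method_name, *args):
    """Invoke a state machine method only if the server marks it as executable.

    read_executable(node, method_name) and call(node, method_name, *args) are
    hypothetical helpers; even with Executable set, the server may still
    reject the call, so the returned error code has to be checked as well.
    """
    if not read_executable(state_machine_node, method_name):
        raise RuntimeError(f"{method_name} is currently not executable")
    return call(state_machine_node, method_name, *args)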

8.2      VisionStateMachineType

8.2.1      Introduction

This ObjectType is a subtype of FiniteStateMachineType and represents the top-level behavior of the vision system. It is formally defined in Table 81.

The Operational state has a mandatory SubStateMachine for the “Automatic” mode of operation and may have additional SubStateMachines for other modes of operation.

The other state may have optional SubStateMachines of the VisionStepModelStateMachineType.

For clarity, transitions into states of SubStateMachines are not shown in the diagram.

 

Figure 22 – States and transitions of the VisionStateMachineType

 

8.2.2      Operation of the VisionStateMachineType

8.2.2.1      Basic operation

After power-up the system goes into a Preoperational state. It is assumed that the vision system loads a configuration which is present on the system and marked as active. From there, it can be put into Operational state either automatically, due to internal initialization processes or by a SelectMode method call. The VisionStateMachineType provides one mandatory method, SelectModeAutomatic for transition into the “Automatic” mode SubStateMachine described in Section 8.3. Subtypes of VisionStateMachineType may offer additional SubStateMachines and thus additional SelectMode method calls.

The system stays in Operational mode, doing its job, until it is either reset or halted, or until an error occurs which suspends normal operation until resolved.
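As an illustration of this start-up sequence, the sketch below uses the asyncua Python library to switch the vision system into the Automatic mode and to read the resulting state. The endpoint URL, the namespace index 4 and the NodeId are placeholders that depend on the concrete server, and it is assumed here that SelectModeAutomatic takes no input arguments.

import asyncio
from asyncua import Client

ENDPOINT = "opc.tcp://vision-system:4840"   # placeholder endpoint
VISION_STATE_MACHINE = "ns=4;i=5002"        # placeholder NodeId of the VisionStateMachine instance

async def enter_automatic_mode():
    async with Client(url=ENDPOINT) as client:
        sm = client.get_node(VISION_STATE_MACHINE)
        # Placeholder browse name; the namespace index depends on the server.
        await sm.call_method("4:SelectModeAutomatic")
        # CurrentState is the standard state machine variable (namespace 0).
        current_state = await (await sm.get_child("0:CurrentState")).read_value()
        print("VisionStateMachine is now in state:", current_state)

asyncio.run(enter_automatic_mode())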

8.2.2.2      Resetting the system

At any time, it may be necessary or desired to revert the vision system into its initial state after power-up, i.e., state Preoperational.

The vision system may decide this due to internal conditions, or the transition may be triggered by calling the Reset method on the VisionStateMachine in the OPC UA server.

The Reset method shall always be executable. If for some reason the vision system is not capable of carrying out this transition, the behavior is undefined. The underlying assumption is that, if the vision system cannot perform a reset, it cannot be assumed that it is capable of carrying out any other controlled transition, including a transition into the Error state.

Application Note:

There are basically two reasons for a reset in the state model of this specification.

Either the vision system is idle – reflected by, e.g. the Ready or Initialized state – and the intent of calling the Reset method is to return to Preoperational state in order to call a different SelectMode method to change the mode of operation. In that case, carrying out the transition should not be a problem.

The other situation is as an emergency measure because the vision system no longer operates correctly. In that case, the method may fail with an internal error code like Bad_UnexpectedError or Bad_InvalidState, but since the vision system is in an incorrect internal state, it is uncertain whether it can reach any other state, like Error.

The client can assume that the Preoperational or Error state will be reached within a reasonable, application-specific time frame. If that is not the case, the client can conclude that intervention is necessary and issue an appropriate message and operator call.
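
One possible client-side realization of this check is sketched below, reusing the state_machine node from the connection example in 8.2.2.1; the timeout value, the namespace index in the method name and the comparison against display-name strings are assumptions, not requirements of this specification.

```python
import time

def reset_and_wait(state_machine, timeout_s=10.0):
    """Call Reset and wait until Preoperational or Error is reached within an
    application-specific time frame (timeout_s is only a placeholder value)."""
    state_machine.call_method("2:Reset")
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        # CurrentState is a LocalizedText; comparing its text against the state
        # names assumes that the display names equal the BrowseNames.
        current = state_machine.get_child(["0:CurrentState"]).get_value()
        if current.Text in ("Preoperational", "Error"):
            return current.Text
        time.sleep(0.2)
    # Neither state was reached in time: operator intervention is required.
    raise TimeoutError("vision system reached neither Preoperational nor Error")
```

Polling CurrentState is used here only for brevity; subscribing to the StateChanged events mentioned in 8.1.3 avoids the polling loop.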

8.2.2.3      Halting the system

A vision system will frequently make use of a number of resources, like camera drivers, files, databases etc., which will need to be properly closed before shutting down the system.

In Halted state, the system shall have put all resources into a state where it is safe to power down the system. However, not all operation is stopped, because the system can be brought out of Halted state by a call to the Reset method, transitioning to the Preoperational state.

The vision system may enter the Halted state due to internal conditions, or a client may initiate the transition by calling the Halt method on the OPC UA server.

The Halt method shall always be executable. If for some reason the system is not capable of carrying out the transition, the VisionStateMachine shall transition into the Error state.
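
The following Python sketch outlines this server-side behavior; the resource objects and the returned state names are purely illustrative and not prescribed by this specification.

```python
class ResourceManager:
    """Illustrative server-side logic for entering the Halted state."""

    def __init__(self, resources):
        # Objects with a close() method, e.g. camera drivers, files, databases.
        self.resources = resources

    def enter_halted(self):
        """Release all resources so the device can safely be powered down.
        Returns the state the VisionStateMachine should transition to."""
        try:
            for resource in self.resources:
                resource.close()
            return "Halted"
        except Exception:
            # Halt shall always be executable; if the system cannot carry out the
            # transition, the VisionStateMachine shall go to the Error state instead.
            return "Error"


# Example with a single file resource:
manager = ResourceManager([open("results.log", "w")])
next_state = manager.enter_halted()
print("next state:", next_state)
```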

8.2.2.4      Error handling

In every state of the VisionStateMachine or any of its SubStateMachines, an error may occur.

The system may also decide to enter the Error state by means of an automatic operation if it cannot – or should not – continue its normal operation in the presence of an error. Note that the presence of an error condition does not necessarily cause the system to enter the Error state; it may be capable of continuing normal operation in the presence of a signaled error condition. However, if the system does enter the Error state, it is mandatory that it indicates this by activating an error condition.

An error shall be signaled by an appropriate error condition. An arbitrary number of error conditions can be active at any time.

Error conditions are exposed as subtypes of the ConditionType defined in OPC 10000-9 and may have Acknowledge and Confirm methods and the appropriate state handling. It is expected that calling these methods has an effect on the underlying system and/or that the underlying vision system monitors the state of the conditions and uses them in its internal decision-making process on whether to stay in the Error state or which state to transition to next.

It is assumed that the Acknowledge method will typically be called by a client automatically, indicating that the message has at least been received. The Confirm method will typically be triggered by human interaction, confirming that the cause of the error has been remedied.

For convenience, the VisionStateMachine may offer the ConfirmAll method, which shall confirm all currently active conditions. The effect on the internal decision making shall be the same as if the methods had been called individually from the outside.

Thus, in the Error state, the system decides which state to transition to next, either on its own or based on external input, like an acknowledgement or confirmation of the error, and other (non-OPC UA) input signals.

Upon entering the Error state, an event of type ErrorEventType is triggered. Upon leaving to some other state, an ErrorResolved event is triggered, if the error is actually resolved, in addition to the mandatory StateChanged events. Thus, a control system monitoring the vision system may listen only to the Error and ErrorResolved events.
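
A monitoring client could, for example, subscribe to these events as sketched below, reusing the client and state_machine objects from the connection example in 8.2.2.1. Subscribing without an explicit event filter and using the state machine node as event source are simplifying assumptions; a production client would typically filter for the ErrorEventType and ErrorResolved event types.

```python
import time

class VisionEventHandler:
    """python-opcua delivers subscribed events via event_notification()."""

    def event_notification(self, event):
        # Message and Severity are standard BaseEventType fields.
        print("Vision system event:", event.Message.Text, "severity:", event.Severity)

# 'client' and 'state_machine' as obtained in the earlier connection sketch.
handler = VisionEventHandler()
subscription = client.create_subscription(500, handler)  # 500 ms publishing interval

# Whether events are reported with the state machine or with the vision system
# object as source depends on the server; subscribing on the state machine node
# is an assumption made for this sketch.
subscription.subscribe_events(state_machine)

time.sleep(60)   # keep the subscription alive while monitoring
subscription.delete()
```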

The Error state can be left in the following ways:

·         By a call to the Halt or Reset method, transitioning to the Halted and Preoperational states, respectively. The condition(s) causing the Error state do not necessarily have to be resolved for this transition. Only if they are resolved shall an ErrorResolved event be triggered.

·         By an internal decision of the system to transition into the Halted or Preoperational states. As these states do not constitute normal productive operation of the system, the condition(s) causing the Error state do not necessarily have to be resolved for this transition. Only if they are resolved shall an ErrorResolved event be triggered. Subsequent action, e.g. a call to ActivateConfiguration in these states, may lead to the error being resolved and the system being capable of resuming normal productive operation.

·         By an internal decision of the system to transition into the Operational state. As this state constitutes normal productive operation, this transition is only allowed if the condition(s) leading to the Error state are actually resolved. Therefore, an ErrorResolved event shall be triggered in this case.

In the last case, i.e. the automatic transition into the Operational state due to the resolution of the error condition(s), the system will actually go into one of the states of the SubStateMachines of the Operational state, using the third method for entering a SubStateMachine described in Section 8.1.2.1: a server-specific decision about the target state.

Thus, the vision system can decide, based on its internal conditions, in which actual state it will continue operation. It may decide that it can immediately continue a job interrupted by the error and return to the SingleExecution or ContinuousExecution states; it may decide that it can immediately take on the next job and return to the Ready state; it may decide that it needs a re-initialization of a recipe and return to the Initialized state. It may even decide that it can resume a synchronization with an external system and return to any state within a VisionStepModelStateMachine inside one of these states.


 

8.2.3      VisionStateMachineType Overview

Figure 23 – Overview VisionStateMachineType

8.2.4      Modes of operation

One underlying idea of this specification is that a vision system may have different modes of operation with very different sets of states, methods and transitions.

Due to the high degree of individuality of vision system requirements and solutions, it does not appear possible to standardize each and every mode of operation of such systems. Therefore, vision systems according to this specification are free to implement additional state machines for such use cases.

However, this specification considers some states universal for machine vision systems and thus outside of these modes of operation, namely the states related to powering up and shutting down the system and to error handling.

The system will in any case need to power up and enter some state automatically, without outside intervention. From this state, the actual mode of operation can be selected, either by a method call or automatically based on internal and external circumstances.

The same holds for shutting down the system: independent of the mode of operation, the system needs a way to enter a state from which it can safely be powered down.

Finally, it is of great advantage to all clients if error handling is identical in all modes of operation.

Therefore, these states are mandatory in this specification, as is one state encompassing the actual operation of the system. Modes of operation shall be specified as SubStateMachines of this operational state.

8.2.5      VisionStateMachineType Definition

VisionStateMachineType is formally defined in Table 81.

Table 81 – VisionStateMachineType Address Space Definition

Attribute | Value
Includes all attributes specified for the FiniteStateMachineType
BrowseName | VisionStateMachineType
IsAbstract | False

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
Subtype of the FiniteStateMachineType defined in OPC 10000-5 Annex B.4.5
HasComponent | Object | Preoperational | -- | StateType |
HasComponent | Object | Halted | -- | StateType |
HasComponent | Object | Error | -- | StateType |
HasComponent | Object | Operational | -- | StateType |
HasComponent | Object | PreoperationalToHalted | -- | TransitionType |

HasComponent

Object

PreoperationalToHaltedAuto