Embedded Vision everywhere?

"Customers deserve the same power in terms of software in an embedded system as they can enjoy in a PC system." – Dr. Olaf Munkelt, MVTec Software GmbH (Image: MVTec Software GmbH)

Is embedded vision really useful for every machine vision application?

Bake: I think the transition will not be so quick on the factory side. We are looking more at the enabler side, because the new markets, like retail, cannot put a PC into those applications. So the cost point becomes the enabler. These new markets will shape the supplier-side market structure for embedded vision technology.

Which new interfaces will be coming for machine vision?

Behrens: We see a clear trend towards ideas born in Industrie 4.0 protocols for the slow loop. I like to distinguish between a fast loop, where you typically have to process your data and this processed data goes to a computer, and a slow loop, where you use the data in completely different ways. For the latter, new protocols have been born, like MQTT, OPC UA and others. You have to dig into a completely different world with big data and cloud services.
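The "slow loop" Behrens describes carries processed inspection results rather than raw image data. A minimal sketch of such a message, assuming hypothetical field names and a hypothetical topic; the actual MQTT publish (e.g. via a client library) is omitted:

```python
import json
import time

def make_inspection_message(station_id, part_id, passed, defects):
    """Build a JSON payload for the slow loop: processed results,
    not raw images, suitable for publishing via MQTT or OPC UA to a
    cloud/big-data backend. Field names are illustrative only."""
    return json.dumps({
        "station": station_id,
        "part": part_id,
        "passed": passed,
        "defects": defects,        # e.g. list of defect labels
        "timestamp": time.time(),  # epoch seconds
    })

# A hypothetical MQTT topic such as "factory/line1/inspection"
# would carry messages like this one:
msg = make_inspection_message("cam-03", "A-1017", False, ["scratch"])
print(msg)
```

The point of the sketch is the asymmetry: the fast loop processes megabytes of pixels in real time, while the slow loop forwards a few hundred bytes of results to entirely different consumers.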

Munkelt: The PC itself is some kind of standard, defined by the CPU, the operating system etc. We don't have this environment at this point in the embedded vision world, so we still see a diverse set of different CPUs. Today we have ARM; ten years ago everybody spoke about TI. On the operating system side we talk about Linux, but there are also commercial versions like VxWorks out there. If we simply take an embedded device, we have to admit that its complexity is higher than on the PC side. More people are comfortable working with a PC-based solution than with an embedded vision solution. There are several discussions about standardizing the interface between the sensor and the CPU side, and there are efforts to give OPC UA a companion specification to make it easier to work on the system side. We should look into this matter to avoid confronting the customer with a multitude of different configurations. We need some kind of streamlined process to make embedded vision easier to apply.

Bake: If you look at the component market on the factory side today, there has been a lot of effort on standardization between the camera and the processor. We need something similar between board-level products, e.g. board-level to chip-level, or between sensor modules and processor modules. The other side is a more standardized way between the application and the vision system: how do you connect your system to the central database?

York: I think there is one area: the software development space. Companies can be enabled to develop on a desktop and then port the result down to a much lower-cost embedded system. That would really open up the market, because it lets you develop seamlessly in a desktop world but then deploy into the mass market on much cheaper embedded systems.

Strampe: We see more and more manufacturers of processors for the industrial market combining ARM cores with special interfaces: for example a dual-core ARM for the computing power plus an additional RISC processor that manages a real-time Ethernet fieldbus. You can, for instance, integrate your vision system into an EtherCAT environment with everything integrated on one chip running a Linux OS.

van der Aa: If you look into the future you will probably see a two-step solution: x86 plus FPGA. Probably in the future they will merge the technologies all-in-one, so we will have more powerful and cost-effective chip solutions.

FPGAs are powerful but also complex. Who can use them?

Munkelt: FPGA programming is a mess. If you have 100 people who can program a PC-based system running Windows, you may have ten people who are comfortable in a Linux environment, and probably one person who is comfortable in an FPGA environment. For that purpose we have companies like Silicon Software, which reduce this complexity because they make the FPGA usable. But we see different hardware architectures right now, and by the way, we need them. The machine vision community has never been able to develop a perfect CPU by itself, and I can imagine some architectures where this would really be of great benefit. We actually need innovation on the CPU and on the FPGA side to provide our own algorithms and to add benefit for the users. In certain applications we need an FPGA; in other applications it is okay to work in an ARM-like architecture.

Strampe: In machine vision and in our hardware we use a lot of FPGAs, but mainly for communication functionality and for functionalities that stay fixed for a longer time. When it comes to image processing, people like to use C++ code or libraries, and then it becomes more complex and difficult to take an image processing algorithm and put it into the FPGA. As we are familiar with Altera FPGAs, we are watching the combination of Intel and Altera and what will happen there; meanwhile Altera is more of an Intel company. Altera also offers SoCs, like Xilinx does: the combination of multi-core ARM processors and an FPGA, and Intel says they will use ARM in the future.

Bake: We need to differentiate between factory and embedded vision applications. In a factory application the typical customer is the vision system engineer. He takes the components, plugs them together and has some colleagues working on the software and writing the code. These people normally don't have FPGA experience, so there you have a problem if the task requires FPGA programming. An embedded vision application is different, because you are dealing with electronic boards, and if there is customization in the electronic boards, you are talking about electronic integration. In those companies, people who program ARM devices, connect memories and also work with FPGAs are very common. It is not as common as PC programming, but from a technology point of view I think it is OK. And if you talk to OEMs on the embedded vision side, we find many people able to cope with FPGAs.

York: We are very confident that both Altera and Xilinx remain faithful to ARM. FPGAs have been wonderful tools for dealing with the connectivity mess you often get in an embedded system. An SoC/FPGA has large processor capabilities and looks just like a conventional processor, but you have all that flexibility to put in different I/Os or connectivity, depending on exactly how you want to adapt your product for different markets or interfaces. That is the power of these FPGAs: largely hidden from the end user, but a real boon to system developers who want to cope with a multitude of different markets and connectivity options but want to have only one physical product.



Issue: inVISION 1 2017, TeDo Verlag GmbH
