1 Introduction

Informatization and automation technology can raise the competitiveness of a company, especially for large manufacturing enterprises. Intelligent manufacturing management methods can not only improve product quality, but also organize production effectively and efficiently, reduce product costs, and strengthen market share. By integrating informatization and automation technology, the production process is increasingly moving toward intelligent manufacturing [1]. The steel industry is a typical capital-intensive, large-scale manufacturing industry, and the production process in steelworks usually consists of mining, beneficiation, coking, ironmaking, steelmaking, rolling, etc. Therefore, it is of great value for steelworks to enhance their comprehensive capacity through intelligent manufacturing.

With the development of computers and artificial intelligence (AI), applications of AI in steelworks have been explored for more than 30 years. During this period, AI was mainly used in equipment fault diagnosis, production planning, expert systems, etc., and gradually moved from theoretical research into practical applications [2, 3]. In the past decade, steel enterprises have paid more attention to informatization and automation, and many large-scale steelworks have deployed enterprise resource planning (ERP), manufacturing execution systems (MES), and other information management and control systems [4]. However, the effectiveness of such systems depends to a large extent on the accuracy and timeliness of the fundamental data acquired online, in which coding and identifying the marked information of materials is crucial [5]. To facilitate tracking and identification of the marked information, label characters are generally printed on the surface of steel materials. Because of the long production cycle and the many material delivery processes, the marked information must be identified in each process or production stage for inventory management and logistics. However, it is difficult to manually identify and mark the label characters on hot steel products in a high temperature environment. Therefore, non-contact methods for tracing and detecting the label characters of steel materials are of great value. Among the various non-contact detection and tracking methods, machine vision is the most suitable for long-distance detection and tracking in steelworks, and many steelworks have shown great interest in applying machine vision to this kind of problem [6–8].

The application of machine vision in steel enterprises generally includes the following two aspects. (i) Surface defect detection for quality control, for example, to prevent unqualified products from entering the next manufacturing process; periodic surface defect detection methods [9] and strip steel surface scratch detection methods [10, 11], etc., have been proposed. (ii) Character marking and recognition to track the manufacturing of steel products; recognition methods that combine image segmentation with KLT, SVM, ANN or optical flow [12–15] have been proposed. However, most current methods focus on information detection or recognition for steel materials at ambient temperature, and there is still no solution for detecting and recognizing the marked information of high temperature steel materials. In practice, many hot manufacturing processes (e.g., continuous casting, hot rolling) in steelworks need to identify the steel material information automatically and online. Additionally, the weak contrast between the hot steel surface and the marked information makes it difficult to identify the marked characters. Therefore, identifying the marked information of hot steel materials poses greater challenges and offers greater value.

The main objective of this paper is to study methods for acquiring, recognizing and managing the marked information on steel products, with particular attention to capturing and identifying label characters on hot steel materials with machine vision. The rest of the paper is arranged as follows. In Sect. 2, the framework of label character marking and management is put forward. In Sect. 3, the machine vision recognition methods for steel material marked information are illustrated. The experimental research and analysis are presented in Sect. 4, and the work is summarized in Sect. 5.

2 Framework of label characters marking and management

Due to the long production cycle and complex logistics transportation, the management information system of a steel enterprise is often constructed in multiple layers. Figure 1 presents a framework for marking, identifying and managing steel material label information in an intelligent manufacturing system. It is mainly composed of the customer relationship management (CRM)/ERP system, MES, warehouse management system (WMS) and supply chain management (SCM)/logistics management system (LMS), etc.

Fig. 1 Management framework of label characters in intelligent manufacturing system

According to customer orders from the CRM or the main production plan, manufacturing tasks are assigned by the ERP system through manufacturing forms, and the properties of a manufactured product are defined by the bill of materials (BOM) and the manufacturing process. Therefore, the marking information of a steel product is defined in the ERP system and stored in a shared database. In the production process, the MES obtains information or manufacturing commands from the ERP system, and the label characters of a product are marked by the marking machine. In practice, there are three typical situations for marking characters on steel materials. The first is marking the characters on billets in the steelmaking factory, where the marking operation is completed immediately after the billets are sheared. The second is marking the characters on a semi-product manufactured by a continuous casting and continuous rolling plant, where the marking operation can usually be done online. The third is marking characters on final steel products in rolling plants, where the marking operation is executed after the product has passed the quality check. Finally, the marked information is checked before storage, and the checked information is stored in the WMS. In addition, the marked information also needs to be checked or identified in the stock-out process or in some manufacturing processes, e.g., identifying the marked information of reheated materials in hot rolling. In this way, every material can be tracked through the manufacturing process by its marked information. After the product is sold, or during logistics delivery, tracing and tracking can be achieved in the SCM with the recorded and marked label information.

The methods of label marking on the surface of steel products usually include template marking, hand writing, and printing by a marking machine [12, 16]. Owing to the high temperature and inaccessibility of the environment in steelworks, automatic marking and identifying methods are preferred. However, the contrast between the label information and the steel background is very weak, especially for hot steel materials. To reduce the difficulty of recognition, we propose to enhance the contrast between the marked characters and the steel surface using an image capturing system, as shown in Fig. 2. A light source is used to illuminate the area of the marked characters, and a filter, which permits only light of certain wavelengths to pass, is mounted in front of the camera lens. The contrast between the marked characters and the background can be increased by matching the lighting and the filters. The captured images are pre-processed and segmented into recognition sample images, and the marked characters are then identified by the recognition algorithm. Following this process, the identified characters are compared with or stored in the corresponding ERP/MES/WMS system.

Fig. 2 Process of label character marking and machine vision recognition

3 Method of marked characters recognition

The label characters are often marked on the rough surface of steel materials, and the captured images need to be further processed before segmentation and recognition. The character recognition algorithm proposed in this paper mainly consists of three parts, namely, image pre-processing, image segmentation, and character recognition, as shown in Fig. 3.

Fig. 3 Process of marked characters recognition

3.1 Image pre-processing

The proposed image pre-processing method mainly includes image contrast enhancement, image binarization, and de-noising. After the images are enhanced, binarization and a median filter are applied before segmentation. Among the pre-processing operations, image contrast enhancement is the key step, and the grey histogram and grey linear transformation enhancement are the common methods [17]. However, the grey value in the character marking area changes non-linearly, owing to the uneven cooling and rough surface of the steel materials. Thus we construct a non-linear operation model with a top-hat transformation and a gamma transformation. The top-hat transformation is presented as

$$ h = f - (f \circ s), $$
(1)

where f represents the image grey level, and f ∘ s denotes the opening of the image with the structuring element s.
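As a rough illustration, the top-hat operation of Eq. (1) can be sketched in Python with NumPy alone; a flat square structuring element is assumed here (in practice a library routine such as OpenCV's `cv2.morphologyEx` with `cv2.MORPH_TOPHAT` would be used instead of the explicit loops):

```python
import numpy as np

def grey_opening(f, k):
    """Grey-level opening f ∘ s with a flat k x k structuring element:
    erosion (local minimum) followed by dilation (local maximum)."""
    pad = k // 2

    def sweep(img, op):
        p = np.pad(img, pad, mode="edge")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = op(p[i:i + k, j:j + k])
        return out

    return sweep(sweep(f, np.min), np.max)

def top_hat(f, k=15):
    # h = f - (f ∘ s): removes the slowly varying background,
    # keeping small bright structures such as marked characters
    return f - grey_opening(f, k)
```

The structuring element size k is a tunable assumption; it should be larger than the character stroke width so that the opening erases the characters and retains only the background.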

The expression of gamma transformation is shown as

$$ y = (x + E)^{\gamma } , $$
(2)

where E represents the compensation coefficient, and γ is the gamma coefficient. Both x and y vary in the range [0, 1].

The top-hat transformation with appropriate structure elements can effectively eliminate the influence of uneven background. Figure 4 presents the process of top-hat transformation operation. Moreover, gamma transformation can selectively enhance the contrast of low grey region or high grey region based on γ. The gamma transformation is shown in Fig. 5.

Fig. 4 Process of top-hat transformation operation

Fig. 5 Gamma transformation of grey level

(i) γ > 1: the contrast of the high grey region is enhanced;

(ii) γ < 1: the contrast of the low grey region is enhanced;

(iii) γ = 1: the original image remains unchanged.
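A minimal sketch of the gamma transformation of Eq. (2), assuming the grey levels have already been scaled to [0, 1] and clipping the result back into range:

```python
import numpy as np

def gamma_transform(x, gamma, E=0.0):
    """y = (x + E)^gamma for x, y in [0, 1]; E is the compensation coefficient."""
    # clip the shifted input and the output so both stay in [0, 1]
    return np.clip(np.clip(x + E, 0.0, 1.0) ** gamma, 0.0, 1.0)
```

With γ < 1 the dark greys are lifted (0.25 maps to 0.5 for γ = 0.5), while γ > 1 suppresses mid greys (0.5 maps to 0.25 for γ = 2), matching the three cases listed above.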

3.2 Image segmentation

The recognition method can only identify a single character at a time; therefore, the characters in an image need to be segmented. We use the vertical projection method to find the boundaries between characters. After binarization and de-noising, the image is integrated along the vertical direction:

$$ f_{k} = \sum\limits_{i = 0}^{n - 1} {\varphi (i,j_{k} )} , $$
(3)

where \( f_{k} \) is the number of character (foreground) pixels in column \( j_{k} \), and \( \varphi (i,j) \) is the binarized image.

By comparing the integral results, the local minimum value \( V_{\min } \) can be determined, and \( y_{k} \) is one side of a character boundary:

$$ V_{\min } = \min \left[ {\sum\limits_{i = 0}^{n - 1} {\varphi \left( {i,j_{k} } \right)} } \right],\quad y_{k} = j_{k} \,|\, f_{k} = V_{\min } , $$
(4)

where \( j_{k} = 0,1, \cdots ,m - 1 \), and m and n are respectively the width and height of the image. In this way, the boundaries of each character can be determined by \( y_{k} \), and the image can be segmented into character sample images accordingly. Figure 6 presents the segmentation method.
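The projection segmentation of Eqs. (3) and (4) can be sketched as follows. The binary image is assumed to hold 1 for character pixels, and the local minima used as boundaries are taken here as the zero-valued gap columns between characters:

```python
import numpy as np

def segment_characters(binary):
    """Split a binarized image into per-character sub-images.
    f_k (Eq. 3) is the column sum of the binary image; the boundaries
    y_k lie where f_k reaches its local minimum (zero in the ideal
    gap columns between characters)."""
    f = binary.sum(axis=0)                       # vertical projection f_k
    segments, start = [], None
    for j, v in enumerate(f):
        if v > 0 and start is None:
            start = j                            # entering a character
        elif v == 0 and start is not None:
            segments.append(binary[:, start:j])  # boundary y_k found
            start = None
    if start is not None:                        # character touches right edge
        segments.append(binary[:, start:])
    return segments
```

On noisy images the gap columns may not be exactly zero, in which case a small threshold on f_k (rather than the comparison with zero) would be used to locate the minima.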

Fig. 6 Image segmentation with projection method

The segmented character images differ in size. To be identified by the recognition algorithm, they must be converted to the same format [18]. This operation is completed with image zooming and a normalization transform in this paper.
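As a sketch of this size normalization step (the 15 × 6 pixel target used later in Sect. 4.3 is assumed), a simple nearest-neighbour zoom can be written as:

```python
import numpy as np

def normalize_char(img, height=15, width=6):
    """Nearest-neighbour zoom of a segmented character image to a fixed size."""
    # map each target row/column index back to a source index
    rows = (np.arange(height) * img.shape[0] // height).astype(int)
    cols = (np.arange(width) * img.shape[1] // width).astype(int)
    return img[np.ix_(rows, cols)]
```

A production system would more likely use an interpolating resize (e.g., bilinear), but nearest-neighbour suffices to illustrate bringing every segment to a common input format.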

3.3 Recognition

Character recognition methods based on different classifiers have been proposed in the literature, mainly including template matching [19], recognition based on the structural features of characters [20, 21], support vector machines [22, 23], and neural network character recognition [24, 25], etc. Among these methods, the back propagation (BP) neural network is advantageous in nonlinear data processing, fault tolerance and autonomous learning, and is suitable for steel label character recognition. Consequently, a three-layer BP neural network algorithm is used in this paper, consisting of an input layer, a hidden layer, and an output layer, as shown in Fig. 7.

Fig. 7 BP neural network topology

\( x_{i} \) is defined as the input of one neuron in the input layer, and the number of inputs is n. The numbers of neurons in the hidden layer and output layer are s and m respectively, and the corresponding transfer functions are \( f_{1} \) and \( f_{2} \). The input vector and target vector are X and T respectively. The forward information transformation proceeds as follows.

The output of the neuron j in hidden layer is

$$ y_{1j} = f_{1} \left( {\sum\limits_{i = 1}^{n} {w_{1ji} x_{i} } + b_{1j} } \right),\quad j = 1,2, \cdots ,s. $$
(5)

The output of the neuron k in output layer is

$$ y_{2k} = f_{2} \left( {\sum\limits_{j = 1}^{s} {w_{2kj} y_{1j} } + b_{2k} } \right),\quad k = 1,2, \cdots ,m. $$
(6)

The error function is

$$ e_{k} = \frac{1}{2}\sum\limits_{k = 1}^{m} {(t_{k} - y_{2k} )^{2} } . $$
(7)

The values of the weight factors and BP errors (BPE) are calculated with the gradient descent method in this paper. Adjusting the weights of the BP neural network is driven by the BPE. The BPE from neuron k to neuron j is determined by multiplying the output layer error \( e_{k} \) by the first-order derivative of \( f_{2} \); hence the BPE is described as \( \delta_{kj} = e_{k} f_{2}^{\prime } \). The variation of the hidden-to-output weights \( \Delta w_{2kj} \) can then be obtained from the BPE, and in the same way the variation of the input-to-hidden weights \( \Delta w_{1ji} \) can be obtained. The transfer function used in this paper is the sigmoid \( f(x) = \frac{1}{{1 + \text{e}^{ - x} }} \). After the parameters are determined and the sample training is executed, character recognition can be achieved with the BP neural network classifier.
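A compact sketch of the three-layer BP network of Eqs. (5)–(7), trained by gradient descent with the sigmoid transfer function; the learning rate and the random weight initialization in [−1, 1] are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNetwork:
    """n input, s hidden, m output neurons; weights initialized in [-1, 1]."""
    def __init__(self, n, s, m, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1, self.b1 = rng.uniform(-1, 1, (s, n)), np.zeros(s)
        self.w2, self.b2 = rng.uniform(-1, 1, (m, s)), np.zeros(m)
        self.lr = lr

    def forward(self, x):
        self.y1 = sigmoid(self.w1 @ x + self.b1)        # Eq. (5)
        self.y2 = sigmoid(self.w2 @ self.y1 + self.b2)  # Eq. (6)
        return self.y2

    def train_step(self, x, t):
        y2 = self.forward(x)
        err = 0.5 * np.sum((t - y2) ** 2)               # Eq. (7)
        d2 = (y2 - t) * y2 * (1 - y2)                   # BPE: error times f2'
        d1 = (self.w2.T @ d2) * self.y1 * (1 - self.y1) # error propagated back
        self.w2 -= self.lr * np.outer(d2, self.y1)      # Δw2 (hidden -> output)
        self.b2 -= self.lr * d2
        self.w1 -= self.lr * np.outer(d1, x)            # Δw1 (input -> hidden)
        self.b1 -= self.lr * d1
        return err
```

For the digit recognition task of Sect. 4, such a network would be dimensioned with 90 inputs, 10 hidden and 10 output nodes, and trained until the error of Eq. (7) falls below a tolerance.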

4 Experiment

4.1 Experimental environment

With the proposed methods, experiments have been carried out in the BaoSteel Steelmaking Plant, and the experimental environment is shown in Fig. 8. The equipment mainly includes a marking machine, one monochrome camera (FLY-PGE-13S2M-CS), one colour camera (SQ-S20C-H30), a metal halide lamp, a set of MIDOPT FS 100 band-pass filters, and a computer (3.0 GHz CPU, 2 GB memory). Molten copper is printed on the billet section to form the label characters in the continuous casting process. The spectrum of the lighting is close to sunlight, and the lamp is waterproof, dustproof, and resistant to high temperature. The MIDOPT FS 100 set includes more than seven band-pass filters, which can block or pass light with wavelengths from the ultraviolet to the infrared.

Fig. 8 Experimental environment

4.2 Image acquisition and segmentation

The quality of images captured at different temperatures with different filters and illuminations is compared. Marked information images are captured with the monochrome camera and the colour camera respectively, and the effects of images captured with filters (e.g., BP324, BP550, etc.) and without filters are shown in Fig. 9. The temperature of the continuous casting billets is about 1,100 °C. Figure 10 presents part of the relationship between wavelength and transmission of the band-pass filters. In addition, to observe the influence of the filters on image quality at a lower temperature, images of billets cooled to 550 °C are captured with the colour camera and multiple filters, without illumination from the lamp. The comparison results are shown in Fig. 11.

Fig. 9 Effects of billet images captured with filters and without filter at 1,100 °C

Fig. 10 Curves of wavelength and transmission of band-pass filters

Fig. 11 Images captured for billets at 550 °C with different filters

According to the effects of the images captured in different situations (see Figs. 9 and 11), there is noise in the images captured without a filter, whether by the monochrome camera or the colour camera. However, the contrast of some images has been enhanced by using certain filters (such as BP550 and BP590), which effectively block some wavelengths while letting the light with wavelengths around those of the marked characters fully pass. In this experiment, the marked information is copper yellow, and the pass bands of filters BP550 and BP590 mainly include light of this bandwidth (see Fig. 10). Therefore, the images captured with filters BP550 and BP590 are clearer than the others. Additionally, the quality of the images can be further improved by the proposed pre-processing method. To compare the segmentation effects for images captured in different situations, segmentation experiments have been carried out. Table 1 presents part of the segmentation results for original images and enhanced images. Figure 12 presents the segmentation accuracy of images captured in different situations, segmented with or without pre-processing.

Table 1 Segmentation comparisons for original images and enhanced images
Fig. 12 Segmentation accuracy of images with different capturing and processing situations

In terms of the results in Table 1 and Fig. 12, the segmentation accuracy of images captured with the BP550 and BP590 filters is very high, as the filters have blocked much of the distracting noise, making the segmentation operation easier and more effective. For the images captured with the other filters, the segmentation accuracy is also improved after the pre-processing operations (e.g., enhancement, de-noising, etc.), but the overall segmentation accuracy is still very low.

4.3 Recognition test

According to the BP neural network recognition method proposed in this paper, each segmented sample image is normalized to the same size of 15 × 6 (height × width) pixels. The grey value of each pixel is the input of one node in the input layer, so there are 90 input nodes in the BP neural network. Furthermore, the candidate characters for recognition are the digits 0–9, so there are 10 nodes in the output layer. For convergence of the learning and training process, 10 nodes are designed in the hidden layer, and the initial weights of the neural network are set to random numbers between −1 and +1.
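The sample construction described above can be sketched as follows: the 15 × 6 grey image is flattened into the 90-dimensional input vector, and the digit label is encoded as a 10-dimensional target with one node per digit (this one-hot coding is an assumption for illustration, as the paper does not state the output coding explicitly):

```python
import numpy as np

def make_sample(char_img, digit):
    """15 x 6 normalized character image -> (90-dim input, 10-dim target)."""
    x = char_img.astype(float).reshape(-1)   # 90 input nodes, one per pixel
    t = np.zeros(10)
    t[digit] = 1.0                           # one output node per digit 0-9
    return x, t
```

At recognition time, the predicted digit is simply the index of the output node with the largest activation.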

In the recognition test, the marked information areas of billets are captured online by the monochrome camera with different filters, under the illumination of the metal halide lamp. About 150 frames are segmented for recognizing the marked strings. Table 2 presents the recognition results for the marked strings on billets. Furthermore, to check the character recognition efficiency of the proposed system, 80 frames captured with the BP550 and BP590 filters are selected and segmented for identifying the single characters 0–9. The results are recorded in Table 3.

Table 2 Recognition accuracy and time consumption with different filters
Table 3 Recognition accuracy and consumption time for single character

From Table 2, it can be concluded that although the same image segmentation and recognition algorithm is applied, the identification differences are significant for images captured in different situations. The recognition accuracy is higher for images captured with filters such as BP550 and BP590, which let the light with wavelengths around the colour of the marked information fully pass while blocking much of the background light. However, the recognition rate is very low for images captured with filters that mainly pass ultraviolet or infrared, e.g., BP324 and BP850, and some images cannot be segmented and identified at all, e.g., those captured with filter BP660. Consequently, to obtain better segmentation and recognition effects, the performance of the lighting and filters should be matched with the properties of the marked information on the steel materials. According to the statistical results in Table 3, the recognition accuracy for a single character is higher than 95%, and the recognition time is about 60 ms when using the BP550 and BP590 filters. This suggests that the proposed character recognition algorithm can effectively and efficiently identify the digital characters marked on the billets.

5 Conclusions

Due to the complex manufacturing processes and long production cycle in the steel industry, it is valuable for steelworks to improve production efficiency, reduce costs, and strengthen their market competitiveness with an intelligent manufacturing system. In this paper, a label character marking and management method for steel materials has been studied. Furthermore, a framework for coding, managing and tracking steel products across different manufacturing stages has been proposed. In this way, the full life cycle of a product, from billet manufacturing to the final product, can be traced with the marked information identifying and tracking system.

The manufacturing environment of steel plants has many factors unfavourable to label information marking and identification, such as high temperature and inaccessibility, so automatically printing the information on steel materials and recognizing it is very necessary, especially for hot steel materials. In this paper, an online marking and machine vision identification method has been provided, and an image acquisition system has been designed to obtain higher quality images under a variety of detection and recognition situations.

With the proposed method, experiments have been carried out in the BaoSteel Steelmaking Plant to recognize the marked information on hot billets. The results show that the differences in image quality are significant in different capturing situations, e.g., at different temperatures and with different filters. If the performance of the image capturing devices matches the properties of the marked information, the contrast between the marked information and the steel material background can be improved, which is helpful for image segmentation and character recognition. Moreover, after the captured images are pre-processed, segmented, and identified with the proposed method, the results also suggest that the enhancement operation can partly raise the accuracy of image segmentation. The proposed character recognition algorithm can effectively and efficiently identify the digital characters marked on the billets. Finally, the proposed methods and framework provide a helpful exploration of obtaining fundamental data with machine vision for intelligent manufacturing systems in steelworks. However, there are many kinds of marked information printing methods for different products in steel plants; for instance, the marking machines and the colours and character formats of the marked information are quite different, so challenges remain in label character marking and recognition. In future studies, it will be important to develop robust recognition algorithms for different kinds of marked information and to set up adaptive image capturing equipment suitable for high temperature, dusty environments.