A real-time deep-learning model is proposed to classify the volume of drill cuttings in offshore drilling operations by analysing a real-time monitoring video stream. Unlike traditional, time-consuming approaches to video analysis, the proposed model performs real-time classification with exceptional accuracy. The methodology consists of three modules. Compared with results tagged manually by engineers, the model achieves highly accurate real-time results without dropping frames.
Many oil and gas companies maintain a complete hole-cleaning workflow to help keep wells in good condition. A well-designed workflow helps preserve wellbore integrity and reduces drilling risks and costs. The traditional method relies on human observation of cuttings returns, a hydraulics model, and a torque-and-drag model; the operation involves several cleaning cycles. Continuous manual monitoring of the cuttings volume on the shale shaker has become a bottleneck in the traditional workflow and cannot provide a consistent assessment of hole-cleaning status, because human labour is not always available and the torque-and-drag measurement is discrete, with a gap between two cycles.
Most previous work has used image-analysis techniques to perform quantitative analysis of the cuttings volume. Traditional image-processing methods require considerable feature-engineering work. Because raw data are often noisy, with missing components, preprocessing and data augmentation play an essential role in the efficiency and performance of the learning model. A deep-learning framework, on the other hand, automatically learns the representations needed for feature detection or classification directly from raw data. This helps overcome the difficulty of setting up and operating equipment in harsh environments and eases the data-collection requirements of an offshore volume-monitoring system.
This study aimed to demonstrate the feasibility of a system that automatically monitors the cuttings volume in real time at a remote location with limited bandwidth for data transmission. The minimum hardware requirements for data collection include the following:
- An uncalibrated charge-coupled-device (CCD) camera
- Uncontrolled light sources
- Low-bit-rate transmission
- An image-processing unit without graphics-processing-unit (GPU) support (e.g., a laptop)
A deep neural network (DNN) was adopted that allows image processing and classification of the cuttings volume on a shale shaker from a location remote from the drilling rig. Specifically, the described model implemented convolutional neural networks as feature extractors and classifiers. The main contributions of this study can be summarised as follows:
- A deep-learning framework that can classify the cuttings volume in real time
- A real-time video analysis system that requires minimal hardware setup effort and is capable of processing low-resolution images
- An object-detection function to automatically locate the area covered by cuttings
- A multithreaded video encoder/decoder implemented to improve real-time processing of streaming video
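To make the feature-extractor/classifier idea concrete, the following is a minimal NumPy sketch, not the authors' network: a small convolution bank plays the role of the feature extractor, and a linear softmax head plays the role of the classifier over four hypothetical cuttings-volume classes. All weights here are random placeholders.

```python
import numpy as np

def conv2d(image, kernels, stride=1):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((kernels.shape[0], oh, ow))
    for k, kern in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[k, i, j] = np.sum(patch * kern)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(frame, kernels, weights, bias):
    feats = relu(conv2d(frame, kernels))     # feature extractor
    pooled = feats.mean(axis=(1, 2))         # global average pooling
    return softmax(pooled @ weights + bias)  # classifier head

rng = np.random.default_rng(0)
frame = rng.random((32, 32))                   # toy grayscale ROI crop
kernels = rng.standard_normal((8, 3, 3)) * 0.1
weights = rng.standard_normal((8, 4)) * 0.1    # 4 cuttings-volume classes
bias = np.zeros(4)
probs = classify(frame, kernels, weights, bias)
```

The output is a probability vector over the four classes; in a trained model, the kernels and head weights would be learned by backpropagation.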
Real-time cuttings-monitoring system overview
The workflow consists primarily of the following subprocesses: real-time video processing (decoding and encoding), region-of-interest (ROI) proposal, data preprocessing, and deep-learning classification. During the drilling process, drilling mud is separated from the cuttings by vibrating shale shakers. The authors developed an intelligent video-processing machine that analyses the video captured as the cuttings pass over a shaker. The analysis results are sent in real time and presented on an office monitor, so the drilling engineer can instantly obtain information about the cuttings volume. The real-time inference (classification) results can also be plotted as histograms for further analysis.
The real-time video-processing module is designed to adapt to a dynamic drilling environment. Real-time monitoring of the cuttings volume on the shale shaker is integral to overall drilling risk management.
Video Frame Capture. A double-threaded mechanism is used to read and write the source stream in real time. The decoding process must be adaptive because the server streams video continuously. If decoding cannot keep up with the streaming speed, synchronisation may be lost and frames skipped. A fast, thread-safe circular buffer was implemented to overcome this hurdle.
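A minimal sketch of such a thread-safe circular buffer follows (hypothetical names; a real decoder would pull frames from a live stream, e.g., with `cv2.VideoCapture`, rather than producing strings). When the consumer falls behind, the newest frame silently evicts the oldest, so the decoder thread never blocks and synchronisation is preserved.

```python
import collections
import threading

class FrameBuffer:
    """Fixed-capacity, thread-safe circular buffer for decoded video frames."""

    def __init__(self, capacity=64):
        self._frames = collections.deque(maxlen=capacity)
        self._lock = threading.Lock()

    def put(self, frame):
        with self._lock:
            self._frames.append(frame)  # evicts the oldest frame when full

    def get(self):
        with self._lock:
            return self._frames.popleft() if self._frames else None

def decoder(buf, n_frames):
    # Stand-in for the decoding thread reading a live video stream.
    for i in range(n_frames):
        buf.put(f"frame-{i}")

buf = FrameBuffer(capacity=8)
t = threading.Thread(target=decoder, args=(buf, 20))
t.start()
t.join()
# Only the 8 most recent frames remain; older frames were dropped to keep pace.
```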
ROI Proposal. Users (both engineers and developers) must provide an ROI marking the area where the cuttings pass over the shaker to ensure consistent inference results. The described learning model attends only to this ROI and therefore receives less-diverse input data. Once the ROI is fixed, the camera must not change its position or angle. The ROI filters out much of the noise that disrupts classification. Either a manual or an automatic approach can be used for ROI selection. When the video stream is decoded, an interactive graphical user interface (GUI) displays a frame showing the shaker position to the user. The user can mark the ROI simply by selecting four vertices on the first frame shown in the GUI.
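A sketch of turning four user-selected vertices into an ROI mask is shown below, assuming a grayscale frame and hypothetical vertex coordinates; the even-odd (ray-casting) rule marks the pixels inside the quadrilateral so that everything outside can be discarded before classification.

```python
import numpy as np

def polygon_mask(shape, vertices):
    """Boolean mask of the pixels inside a polygon given as (x, y) vertices,
    computed with the even-odd (ray-casting) rule."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        # A horizontal ray from each pixel crosses this edge when the pixel's
        # y lies between the edge endpoints and x is left of the crossing point.
        crosses = ((ys < y0) != (ys < y1)) & \
                  (xs < x0 + (ys - y0) * (x1 - x0) / (y1 - y0 + 1e-12))
        mask ^= crosses
    return mask

frame = np.random.default_rng(1).random((120, 160))          # stand-in frame
roi_vertices = [(30, 20), (130, 25), (125, 100), (25, 95)]   # hypothetical clicks
mask = polygon_mask(frame.shape, roi_vertices)
roi_only = np.where(mask, frame, 0.0)  # zero out everything outside the ROI
```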
However, manually selecting a region requires repetitive work. Workers may slightly or unintentionally change the camera angle while working around the shaker during drilling, and the camera angle may differ from shaker to shaker.
A faster region-based convolutional neural network (faster R-CNN) can detect the cuttings area and thereby automate this step. The raw video frame is used as input, and the ROI is manually tagged with a bounding box. Each raw image is fed into a feature extractor, which generates a feature map. The feature map is passed to a smaller convolutional neural network that takes it as input and outputs region proposals. These proposals are fed into a classifier that labels each one as background or ROI. When a region proposal is classified as ROI, its coordinates, width, and height are further adjusted by a region regressor. Backpropagation was used to train the model.
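The two pieces of that pipeline that are easiest to misread, anchor generation and box refinement, can be sketched as follows. This is a toy illustration of the standard faster-R-CNN box parameterization, not the authors' implementation; the objectness scores here are random placeholders standing in for the region-proposal network's output.

```python
import numpy as np

def make_anchors(fmap_h, fmap_w, stride, size):
    """One square anchor per feature-map cell, in image coordinates
    (x_center, y_center, width, height)."""
    ys, xs = np.mgrid[0:fmap_h, 0:fmap_w]
    cx = (xs.ravel() + 0.5) * stride
    cy = (ys.ravel() + 0.5) * stride
    n = cx.size
    return np.stack([cx, cy, np.full(n, size), np.full(n, size)], axis=1)

def apply_deltas(boxes, deltas):
    """Refine (cx, cy, w, h) boxes with the standard regression
    parameterization (dx, dy, dw, dh): centers shift proportionally to
    box size, and sizes are scaled exponentially (so they stay positive)."""
    cx = boxes[:, 0] + deltas[:, 0] * boxes[:, 2]
    cy = boxes[:, 1] + deltas[:, 1] * boxes[:, 3]
    w = boxes[:, 2] * np.exp(deltas[:, 2])
    h = boxes[:, 3] * np.exp(deltas[:, 3])
    return np.stack([cx, cy, w, h], axis=1)

rng = np.random.default_rng(0)
anchors = make_anchors(fmap_h=4, fmap_w=6, stride=16, size=32)
objectness = rng.random(len(anchors))       # stand-in for the RPN head's scores
top = anchors[np.argsort(objectness)[-5:]]  # keep the 5 highest-scoring proposals
refined = apply_deltas(top, rng.standard_normal((5, 4)) * 0.1)
```

In the real detector, the deltas come from the trained region regressor, and the surviving boxes would be filtered further (e.g., by non-maximum suppression) before classification.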
The authors used 50 videos for training and four videos for testing. In the training results for cuttings-area detection, the classification loss decreased and converged after approximately 1,800 training steps. The localisation loss increased at the beginning of training but gradually decreased and converged after about 2,000 training steps.
Randomised Subsampling Within the ROI. The ROI is selected by the user at the beginning of the video stream or automatically by the faster-R-CNN-based cuttings-area detector. However, vibration or wind can shift the position and angle of the camera and degrade classification performance when the system runs unattended. This study proposed a random-subsampling strategy that uses a batch of small image patches to overcome this problem. Image patches are randomly sampled from the ROI; instead of feeding the entire ROI to the DNN, the batch of patches is used as input.

Whitening Transformation Using Principal-Component Analysis (PCA). A PCA whitening transformation was applied to the video frames immediately before they were fed to the DNN. The aim is to make the input less redundant. PCA whitening removes linear correlations between neighbouring pixels and can improve model convergence.
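The two preprocessing steps can be sketched together; this is a minimal NumPy version under assumed patch counts and sizes, not the authors' code. Patches are drawn at random positions inside the ROI, then each pixel position is decorrelated across the batch and scaled to roughly unit variance.

```python
import numpy as np

def sample_patches(roi, n_patches, size, rng):
    """Randomly subsample square patches from inside the ROI image."""
    h, w = roi.shape
    ys = rng.integers(0, h - size, n_patches)
    xs = rng.integers(0, w - size, n_patches)
    return np.stack([roi[y:y+size, x:x+size] for y, x in zip(ys, xs)])

def pca_whiten(patches, eps=1e-5):
    """PCA whitening: project onto the principal components of the pixel
    covariance and rescale each component to unit variance."""
    flat = patches.reshape(len(patches), -1).astype(float)
    flat = flat - flat.mean(axis=0)            # center each pixel position
    cov = flat.T @ flat / len(flat)
    eigvals, eigvecs = np.linalg.eigh(cov)
    white = flat @ eigvecs / np.sqrt(eigvals + eps)  # eps guards tiny eigenvalues
    return white.reshape(patches.shape)

rng = np.random.default_rng(2)
roi = rng.random((96, 128))                    # stand-in ROI crop of a frame
patches = sample_patches(roi, n_patches=32, size=8, rng=rng)
whitened = pca_whiten(patches)                 # batch fed to the DNN
```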
Experimentation and performance evaluation
To evaluate performance, the proposed method was tested on a live video stream, and the real-time classification results were compared with manual annotations. Following the procedures rig personnel use to monitor cuttings returns in real time, the cuttings volume is divided into four levels: heavy, medium, light, and none. Each video was labelled by four experts (the ground-truth label represents the consensus of the experts). Test results show that the system can handle live video streaming without dropping frames. The proposed DNN successfully classifies all classes. The results show that the proposed model achieves a significant performance improvement compared with conventional networks.
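The evaluation protocol, deriving a ground-truth label from the four experts' votes and scoring the model against it, can be sketched as follows. The class names and the toy votes are illustrative assumptions, not data from the study.

```python
from collections import Counter

LEVELS = ["none", "light", "medium", "heavy"]  # assumed class names

def consensus(labels):
    """Ground-truth label = majority vote among the expert annotations."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predictions, expert_votes):
    """Fraction of frames where the model matches the expert consensus."""
    truth = [consensus(votes) for votes in expert_votes]
    hits = sum(p == t for p, t in zip(predictions, truth))
    return hits / len(truth)

# Toy example: three frames, each annotated by four experts.
expert_votes = [["heavy", "heavy", "medium", "heavy"],
                ["light", "light", "light", "none"],
                ["none", "none", "none", "none"]]
preds = ["heavy", "light", "light"]
score = accuracy(preds, expert_votes)  # 2 of 3 frames match the consensus
```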