ABSTRACT
The rapid progression of autonomous vehicles (AVs) necessitates the implementation of advanced safety measures to ensure their dependable and secure operation in a variety of environments. A critical element in enhancing AV safety is the precise classification and prediction of the behavior of surrounding objects, such as other vehicles, pedestrians, cyclists, and static obstacles. This project is dedicated to developing a robust object behavior classification framework utilizing machine learning and deep learning techniques. By integrating sensor data from radar and cameras, the proposed system processes real-time environmental information to accurately identify and categorize object behaviors. The project underscores the importance of feature extraction, data fusion, and model optimization in achieving high classification accuracy. Moreover, it addresses challenges related to dynamic environments, occlusions, and varying weather conditions, providing solutions to improve the robustness of the classification system.
STEP 1: Design simulation scenarios
To design the simulation scenarios, the Automated Driving Toolbox is required. The Driving Scenario Designer app is used to design scenario test cases, configure sensors, and generate synthetic object detections. The app provides a drag-and-drop interface for creating artificial driving scenarios, letting users place roads, vehicles, pedestrians, and other actors for testing autonomous vehicles.
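The app can also be launched directly from the MATLAB command line, and a previously saved scenario can be reopened for further editing. A minimal sketch (the file name mySafeScenario.mat is a hypothetical example):

% Open the Driving Scenario Designer app
drivingScenarioDesigner
% Reopen a previously saved scenario file for further editing (hypothetical file name)
drivingScenarioDesigner('mySafeScenario.mat')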
STEP 2: Create a versatile scenario dataset to assess object behavior
A versatile dataset containing both safe and risky scenario test cases is required for training and testing the machine-learning model, so 21 scenario test cases (11 safe and 10 risky) have been developed for this model.
Safe scenario:
Risky scenario:
STEP 3: Mounting the sensor on the ego vehicle
Sensors, such as radar and a camera, are mounted on the ego vehicle to acquire data. They are placed so that they get maximum coverage of the whole scenario. The sensors record data from the scenario test cases, and the scenario is exported as a MATLAB function from which the sensor data can be extracted.
A Simulink model is built from these scenario test cases.
The scenario environment is clearly visible in the bird's-eye plot, which gives a clear view of the scenario, including the sensor coverage.
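A comparable bird's-eye view can also be created programmatically to verify sensor coverage. Below is a minimal sketch using birdsEyePlot, reusing the camera and radar mounting values from the generated scenario function shown later; the range and field-of-view numbers are illustrative assumptions:

% Bird's-eye plot with camera and radar coverage areas
bep = birdsEyePlot('XLim', [0 50], 'YLim', [-20 20]);
camPlotter = coverageAreaPlotter(bep, 'DisplayName', 'Camera', 'FaceColor', 'b');
radarPlotter = coverageAreaPlotter(bep, 'DisplayName', 'Radar', 'FaceColor', 'r');
% plotCoverageArea(plotter, position, range, yaw, fieldOfView)
plotCoverageArea(camPlotter, [0.04 -0.19], 50, 6.6, 45);      % range and FOV assumed
plotCoverageArea(radarPlotter, [-0.18 -0.04], 100, 4.65, 20); % range and FOV assumed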
STEP 4: Acquiring sensor data
This scenario data is then exported as a MATLAB function containing all the sensor and scenario readings. The sensor and scenario data from all the test cases are then loaded into a single file so the machine-learning algorithms can be applied.
The Scenario Reader block reads the positions of actors and roads from scenario files created with the Driving Scenario Designer app, and outputs this information in either the ego-vehicle or the world coordinate system.
Scenario MATLAB function:
function [allData, scenario, sensors] = safe1()
%safe1 - Returns sensor detections
%   allData = safe1 returns sensor detections in a structure
%   with time for an internally defined scenario and sensor suite.
%
%   [allData, scenario, sensors] = safe1 optionally returns
%   the drivingScenario and detection generator objects.

% Generated by MATLAB(R) 23.2 (R2023b) and Automated Driving Toolbox 23.2 (R2023b).
% Generated on: 25-Apr-2024 23:14:11

% Create the drivingScenario object and ego car
[scenario, egoVehicle] = createDrivingScenario;

% Create all the sensors
[sensors, numSensors] = createSensors(scenario);

allData = struct('Time', {}, 'ActorPoses', {}, 'ObjectDetections', {}, 'LaneDetections', {}, 'PointClouds', {}, 'INSMeasurements', {});
running = true;
while running
    % Generate the target poses of all actors relative to the ego vehicle
    poses = targetPoses(egoVehicle);
    time = scenario.SimulationTime;

    objectDetections = {};
    laneDetections = [];
    ptClouds = {};
    insMeas = {};
    isValidTime = false(1, numSensors);

    % Generate detections for each sensor
    for sensorIndex = 1:numSensors
        sensor = sensors{sensorIndex};
        [objectDets, isValidTime(sensorIndex)] = sensor(poses, time);
        numObjects = length(objectDets);
        objectDetections = [objectDetections; objectDets(1:numObjects)]; %#ok<AGROW>
    end

    % Aggregate all detections into a structure for later use
    if any(isValidTime)
        allData(end + 1) = struct( ...
            'Time', scenario.SimulationTime, ...
            'ActorPoses', actorPoses(scenario), ...
            'ObjectDetections', {objectDetections}, ...
            'LaneDetections', {laneDetections}, ...
            'PointClouds', {ptClouds}, ...
            'INSMeasurements', {insMeas}); %#ok<AGROW>
    end

    % Advance the scenario one time step and exit the loop if the scenario is complete
    running = advance(scenario);
end

% Restart the driving scenario to return the actors to their initial positions.
restart(scenario);

% Release all the sensor objects so they can be used again.
for sensorIndex = 1:numSensors
    release(sensors{sensorIndex});
end

%%%%%%%%%%%%%%%%%%%%
% Helper functions %
%%%%%%%%%%%%%%%%%%%%

% Units used in createSensors and createDrivingScenario
% Distance/Position - meters
% Speed             - meters/second
% Angles            - degrees
% RCS Pattern       - dBsm

function [sensors, numSensors] = createSensors(scenario)
% createSensors Returns all sensor objects to generate detections

% Assign into each sensor the physical and radar profiles for all actors
profiles = actorProfiles(scenario);
sensors{1} = visionDetectionGenerator('SensorIndex', 1, ...
    'SensorLocation', [0.0400000000000003 -0.19], ...
    'Yaw', 6.60483549675397, ...
    'DetectorOutput', 'Objects only', ...
    'Intrinsics', cameraIntrinsics([800 800], [320 240], [480 640]), ...
    'ActorProfiles', profiles);
sensors{2} = drivingRadarDataGenerator('SensorIndex', 2, ...
    'MountingLocation', [-0.18 -0.04 0.2], ...
    'MountingAngles', [4.65335158726839 0 0], ...
    'TargetReportFormat', 'Detections', ...
    'Profiles', profiles);
numSensors = 2;

function [scenario, egoVehicle] = createDrivingScenario
% createDrivingScenario Returns the drivingScenario defined in the Designer

% Construct a drivingScenario object.
scenario = drivingScenario;

% Add all road segments
roadCenters = [41.4 2.5 0;
    5.6 1.1 0];
laneSpecification = lanespec(2);
road(scenario, roadCenters, 'Lanes', laneSpecification, 'Name', 'Road');

% Add the ego vehicle
egoVehicle = vehicle(scenario, ...
    'ClassID', 1, ...
    'Position', [36.5 3.2 0], ...
    'Mesh', driving.scenario.carMesh, ...
    'Name', 'Car');
waypoints = [36.5 3.2 0;
    28.2 2.5 0;
    17.5 3.3 0];
speed = [30;30;30];
trajectory(egoVehicle, waypoints, speed);

% Add the non-ego actors
car1 = vehicle(scenario, ...
    'ClassID', 1, ...
    'Position', [17.9 -0.2 0], ...
    'Mesh', driving.scenario.carMesh, ...
    'Name', 'Car1');
waypoints = [17.9 -0.2 0;
    29.3 0 0;
    34.6 -0.2 0];
speed = [30;30;30];
trajectory(car1, waypoints, speed);

pedestrian = actor(scenario, ...
    'ClassID', 4, ...
    'Length', 0.24, ...
    'Width', 0.45, ...
    'Height', 1.7, ...
    'Position', [23.4 5.1 0], ...
    'RCSPattern', [-8 -8;-8 -8], ...
    'Mesh', driving.scenario.pedestrianMesh, ...
    'Name', 'Pedestrian');
waypoints = [23.4 5.1 0;
    30 4.5 0;
    32.7 5.1 0];
speed = [1.5;1.5;1.5];
trajectory(pedestrian, waypoints, speed);
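Once exported, the scenario function can be run from the command line to simulate the scenario headlessly and save its output for the later scripts. A minimal usage sketch; the file name s1.mat follows the naming convention used below, and since the downstream scripts index fields of a variable named data, the exact saved structure must match what those scripts expect, so treat this as illustrative:

% Run the exported scenario and inspect the recorded detections
[allData, scenario, sensors] = safe1();
fprintf('Recorded %d time steps\n', numel(allData));
disp(allData(1).ObjectDetections);  % camera and radar detections at the first step
% Persist the output for the data-loading scripts that follow (illustrative)
data = allData;
save('s1.mat', 'data');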
Loading data into a single file and labeling the data:
All the scenario datasets are loaded into a single file so that the sensor data can be studied and the machine-learning algorithm can be applied to it. For training purposes, the data must be labeled as safe or risky.
clc;
clear;
close all;

% Initialize empty arrays for features and labels
allObjectDetections = [];
allLaneDetections = [];
% ... Repeat for other fields (e.g., PointClouds, INSMeasurements)
allLabels = [];

% Load data from all 21 scenarios
for i = 1:21
    filename = sprintf('s%d.mat', i);
    loadedData = load(filename);

    % Extract relevant fields
    objectData = loadedData.ObjectDetections;
    laneData = loadedData.LaneDetections;
    % ... Repeat for other fields

    % Combine data from each scenario
    allObjectDetections = [allObjectDetections; objectData];
    allLaneDetections = [allLaneDetections; laneData];
    % ... Repeat for other fields

    % Assign labels (safe: 1, risky: 2)
    if i <= 11
        labels = ones(size(objectData, 1), 1);     % Safe
    else
        labels = 2 * ones(size(objectData, 1), 1); % Risky
    end

    % Combine labels
    allLabels = [allLabels; labels];
end

% Now 'allObjectDetections', 'allLaneDetections', and 'allLabels' contain the combined data
% Proceed with training your model using the labeled dataset
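The combined arrays can then be persisted so the training script does not need to re-read all 21 files. A small sketch; the file name labeledDataset.mat is a hypothetical choice:

% Save the combined, labeled dataset for reuse (file name is illustrative)
save('labeledDataset.mat', 'allObjectDetections', 'allLaneDetections', 'allLabels');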
Step 5: Apply machine learning algorithms for training and testing
The object classification system operates by continuously monitoring the sensor readings and the bird's-eye plot to detect sudden changes in the movement of the other actors in the scenario that indicate a potential accident or collision. For this classification task, a machine-learning model is trained on the sensor and scenario data. Applying machine-learning algorithms in MATLAB requires the Statistics and Machine Learning Toolbox, whose built-in functions are used to implement this model.
The data is then divided randomly into 80% training data and 20% test data. Here, SVM and logistic regression are used, and their accuracy is checked during the testing phase.
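As a variation on the random split used in the script below, a stratified hold-out split would keep the safe/risky class proportions the same in both partitions, which matters with a dataset this small. A minimal sketch using cvpartition from the Statistics and Machine Learning Toolbox, assuming allLabels as constructed in the script below:

% Stratified 80/20 hold-out split, preserving class proportions
c = cvpartition(allLabels, 'HoldOut', 0.2);
trainIdx = training(c); % logical index of training rows
testIdx = test(c);      % logical index of test rows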
clc;
clear;
close all;

% Load all 21 scenario files and label them (safe: 0, risky: 1)
alldata = [];
allLabels = [];
for i = 1:21
    filename = sprintf('s%d.mat', i);
    loadedData = load(filename);
    data = loadedData.data;
    alldata = [alldata; data];
    if i <= 11
        labels = zeros(size(data, 1), 1); % Safe
    else
        labels = ones(size(data, 1), 1);  % Risky
    end
    allLabels = [allLabels; labels];
end

% Random 80/20 train/test split
[trainInd, ~, testInd] = dividerand(size(alldata, 1), 0.8, 0, 0.2);
trainData = alldata(trainInd, :);
trainData = arrayfun(@(x) x.BirdsEyePlot.UnitsPerPixel(1,1), trainData);
trainLabels = allLabels(trainInd);
testData = alldata(testInd, :);
testData = arrayfun(@(x) x.BirdsEyePlot.UnitsPerPixel(1,1), testData);
testLabels = allLabels(testInd);

% Train and evaluate an SVM classifier
SVMModel = fitcsvm(trainData, trainLabels);
[label, score] = predict(SVMModel, testData);
accuracy = sum(label == testLabels) / length(testLabels);
disp(['SVM Accuracy: ', num2str(accuracy)]);

% Train and evaluate a logistic regression model
mdl = fitglm(trainData, trainLabels, 'Distribution', 'binomial');
% Make predictions on the test data; for a binomial GLM, predict
% already returns probabilities, so no extra sigmoid is needed
predictedProbabilities = predict(mdl, testData);
disp(predictedProbabilities)
threshold = 0.5;
predictedLabels = (predictedProbabilities >= threshold); % Threshold of 0.5

% Evaluate accuracy
accuracy = sum(predictedLabels == testLabels) / length(testLabels);
disp(['Logistic Regression Accuracy: ', num2str(accuracy)]);
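Beyond raw accuracy, a confusion matrix separates false alarms from missed risky scenarios, which is useful on a small test split. A minimal sketch using confusionmat, assuming testLabels and predictedLabels from the script above and that both classes appear in the test split, so the divisions below are well defined:

% Confusion matrix (rows: true class, columns: predicted class)
C = confusionmat(testLabels, double(predictedLabels));
disp(C);
precision = C(2,2) / sum(C(:,2)); % of scenarios flagged risky, fraction truly risky
recall = C(2,2) / sum(C(2,:));    % of truly risky scenarios, fraction flagged
fprintf('Precision: %.2f, Recall: %.2f\n', precision, recall);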
The accuracy of the Support Vector Machine (SVM) varies between 0.25 and 0.5, while the logistic regression model achieves an accuracy of 1 on the test split. As the logistic regression model has the higher accuracy, it is used for the final classifier.
STEP 6: Testing of the model
For testing purposes, a new scenario test case is developed and checked with the object behavior classification model. As an example, a risky scenario test case is developed and the model is applied to it.
Test scenario:
CODE:
clc;
clear;
close all;

% Load all 21 scenario files and label them (safe: 0, risky: 1)
alldata = [];
allLabels = [];
for i = 1:21
    filename = sprintf('s%d.mat', i);
    loadedData = load(filename);
    data = loadedData.data;
    alldata = [alldata; data];
    if i <= 11
        labels = zeros(size(data, 1), 1); % Safe
    else
        labels = ones(size(data, 1), 1);  % Risky
    end
    allLabels = [allLabels; labels];
end

% Random 80/20 train/test split
[trainInd, ~, testInd] = dividerand(size(alldata, 1), 0.8, 0, 0.2);
trainData = alldata(trainInd, :);
trainData = arrayfun(@(x) x.BirdsEyePlot.UnitsPerPixel(1,1), trainData);
trainLabels = allLabels(trainInd);
testData = alldata(testInd, :);
testData = arrayfun(@(x) x.BirdsEyePlot.UnitsPerPixel(1,1), testData);
testLabels = allLabels(testInd);

% Train and evaluate an SVM classifier
SVMModel = fitcsvm(trainData, trainLabels);
[label, score] = predict(SVMModel, testData);
accuracy = sum(label == testLabels) / length(testLabels);
disp(['SVM Accuracy: ', num2str(accuracy)]);

% Train and evaluate a logistic regression model
mdl = fitglm(trainData, trainLabels, 'Distribution', 'binomial');
% Make predictions on the test data
predictedProbabilities = predict(mdl, testData);
disp(predictedProbabilities)
threshold = 0.5;
predictedLabels = (predictedProbabilities >= threshold); % Threshold of 0.5

% Evaluate accuracy
accuracy = sum(predictedLabels == testLabels) / length(testLabels);
disp(['Logistic Regression Accuracy: ', num2str(accuracy)]);

% Load the testing data
loadedData2 = load("test.mat");

% Check the structure of loadedData2
disp('Structure of loadedData2:');
disp(loadedData2);

% Extract the same feature used for training
% (assumes the test data structure matches the training data)
testData2 = loadedData2.data.BirdsEyePlot.UnitsPerPixel(1,1);

% Classify the new scenario with the logistic regression model
predictedProbabilities = predict(mdl, testData2);
disp(predictedProbabilities)
threshold = 0.5;
predictedLabels = (predictedProbabilities >= threshold);
if predictedLabels == 0
    disp("Safe Scenario")
else
    disp("Risky Scenario")
end
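For reference, the test.mat file loaded above can be produced the same way as the training scenarios, by exporting the new test scenario as a MATLAB function and saving its output. A sketch; the function name testScenario is hypothetical, and the saved structure must match the fields the script above indexes:

% Export the test scenario from the Designer app as a function, then:
[allData, scenario, sensors] = testScenario(); % hypothetical exported function
data = allData;                                % structure must match the script's expectations
save('test.mat', 'data');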
Output:
For this test case, the object behavior classification model returns the output "Risky Scenario".
An efficient object behavior classification model has been developed to enhance the safety of autonomous vehicles. It supports safe navigation in difficult situations and addresses challenges related to dynamic environments, occlusions, and varying weather conditions, improving the robustness of the classification system.