ios-adas-app

A deep learning application project for an advanced driver assistance system (ADAS)!
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project


A deep learning model trained with a CNN (Convolutional Neural Network) processes images captured by a mobile device's camera to provide a distance alarm for the vehicle ahead and an alert when the vehicle changes lanes.

Here's why:

  • A smartphone camera, combined with the Convolutional Neural Networks that dominate image processing in deep learning, can be applied in many fields
  • The goal of this project is to run data captured by a mobile device (much like a dashcam) through a deep learning model and alert the driver

Conclusions:

  • This project showed that a CNN model that detects vehicles while driving can be exported to a mobile device and run inside a mobile application
  • Implementing the target functions with a model suited to a mobile environment requires a sufficiently large dataset, and tests in the actual driving environment are needed to measure the variables between the individual functions
  • If the system were designed and implemented with object segmentation instead, interactions between objects would be easier to express and interpret, provided segmentation performs well enough in a mobile environment
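The forward-distance alarm described above can be approximated with a pinhole-camera model: distance ≈ focal length × real vehicle height / bounding-box height in pixels. Below is a minimal Python sketch of that idea; the focal length, vehicle height, and alarm threshold are illustrative assumptions, not values from this project:

```python
# Hedged sketch: estimate distance to the lead vehicle from its detected
# bounding-box height using a pinhole-camera approximation.
# All constants below are illustrative assumptions.

FOCAL_LENGTH_PX = 1000.0   # camera focal length in pixels (assumed)
VEHICLE_HEIGHT_M = 1.5     # typical passenger-car height in metres (assumed)
ALARM_DISTANCE_M = 10.0    # raise an alarm closer than this (assumed)

def estimate_distance(box_height_px: float) -> float:
    """Pinhole model: distance = f * real_height / pixel_height."""
    if box_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return FOCAL_LENGTH_PX * VEHICLE_HEIGHT_M / box_height_px

def should_alarm(box_height_px: float) -> bool:
    """True when the estimated distance falls below the alarm threshold."""
    return estimate_distance(box_height_px) < ALARM_DISTANCE_M

# A 300 px tall box -> 1000 * 1.5 / 300 = 5.0 m  -> alarm
# A 100 px tall box -> 15.0 m                    -> no alarm
```

In practice the focal length would come from camera calibration, and the assumed vehicle height introduces error that real-road testing would have to quantify.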

(back to top)

Built With

  • Swift
  • Python
  • Tensorflow
  • C++
  • OpenCV

(back to top)

Getting Started

To get a local copy up and running, follow the steps below.

Prerequisites

The steps below install the required packages and show how to train and evaluate the detection model.

  • Package the Object Detection API, pycocotools, and TF Slim

    bash object_detection/dataset_tools/create_pycocotools_package.sh /tmp/pycocotools
    python setup.py sdist
    (cd slim && python setup.py sdist)
  • Training on Google Cloud using a TPU

    gcloud ml-engine jobs submit training `whoami`_object_detection_`date +%s` \
    --job-dir=gs://${YOUR_GCS_BUCKET}/train \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
    --module-name object_detection.model_tpu_main \
    --runtime-version 1.13 \
    --scale-tier BASIC_TPU \
    --region us-central1 \
    -- \
    --model_dir=gs://${YOUR_GCS_BUCKET}/train \
    --tpu_zone us-central1 \
    --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/pipeline.config
  • Training on Google Cloud using GPUs

    • config.yaml
      trainingInput:
        scaleTier: CUSTOM
        # Configure a master worker with 4 K80 GPUs
        masterType: complex_model_m_gpu
        # Configure 9 workers, each with 4 K80 GPUs
        workerCount: 9
        workerType: complex_model_m_gpu
        # Configure 3 parameter servers with no GPUs
        parameterServerCount: 3
        parameterServerType: large_model
      
    • run
      gcloud ml-engine jobs submit training object_detection_`date +%m_%d_%Y_%H_%M_%S` \
          --python-version 3.5 \
          --runtime-version 1.13 \
          --job-dir=gs://${YOUR_GCS_BUCKET}/train \
          --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
          --module-name object_detection.model_main \
          --region us-central1 \
          --config /tmp/config.yaml \
          -- \
          --model_dir=gs://${YOUR_GCS_BUCKET}/train \
          --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/pipeline.config
  • Training locally using a GPU

    python object_detection/model_main.py \
        --pipeline_config_path=${YOUR_LOCAL_PATH}/data/pipeline.config \
        --model_dir=${YOUR_LOCAL_PATH}/train/ \
        --num_train_steps=200000 \
        --sample_1_of_n_eval_examples=1 \
        --alsologtostderr
  • Evaluating on Google Cloud

    gcloud ml-engine jobs submit training `whoami`_object_detection_eval_validation_`date +%s` \
    --job-dir=gs://${YOUR_GCS_BUCKET}/train \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
    --module-name object_detection.model_main \
    --runtime-version 1.13 \
    --scale-tier BASIC_GPU \
    --region us-central1 \
    -- \
    --model_dir=gs://${YOUR_GCS_BUCKET}/train \
    --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/pipeline.config \
    --checkpoint_dir=gs://${YOUR_GCS_BUCKET}/train
  • Launch TensorBoard

    • Google Cloud

      tensorboard --logdir=gs://${YOUR_GCS_BUCKET}/train
    • local

      tensorboard --logdir=${YOUR_LOCAL_PATH}/train

Installation

Perform the following procedure to install the required package.

  1. Clone the repo
    git clone https://github.com/week-end-manufacture/ios-adas-app.git
  2. Export the TFLite-compatible frozen graph (.pb)
     python object_detection/export_tflite_ssd_graph.py \
     --pipeline_config_path=$CONFIG_FILE \
     --trained_checkpoint_prefix=$CHECKPOINT_PATH \
     --output_directory=$OUTPUT_DIR \
     --add_postprocessing_op=true
  3. Run toco through Bazel to produce the .tflite file
    bazel run -c opt tensorflow/lite/toco:toco -- \
    --input_file=$OUTPUT_DIR/tflite_graph.pb \
    --output_file=$OUTPUT_DIR/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3'  \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops
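The exported detect.tflite model ends in the TFLite_Detection_PostProcess custom op, which emits four tensors per frame: normalized boxes, class indices, scores, and a detection count. A hedged Python sketch of filtering those outputs by a confidence threshold (the 0.5 threshold and the toy values are illustrative assumptions):

```python
# Hedged sketch: filter the four output tensors of the
# TFLite_Detection_PostProcess op by a score threshold.
# The 0.5 default threshold is an illustrative assumption.

def filter_detections(boxes, classes, scores, count, threshold=0.5):
    """boxes: rows of [ymin, xmin, ymax, xmax] normalized to [0, 1];
    classes/scores: parallel lists; count: number of valid rows.
    Returns (box, class_id, score) tuples at or above the threshold."""
    results = []
    for i in range(int(count)):
        if scores[i] >= threshold:
            results.append((boxes[i], int(classes[i]), scores[i]))
    return results

# Toy example: two valid detections, one below the threshold.
boxes = [[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.3, 0.3]]
classes = [2, 0]
scores = [0.9, 0.3]
detections = filter_detections(boxes, classes, scores, count=2)
# Only the score-0.9 detection survives the 0.5 threshold.
```

On device, the same filtering would run on the interpreter's output tensors before mapping boxes back to pixel coordinates for the alarm logic.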

(back to top)

Usage

For more examples, please refer to the Documentation

(back to top)

Roadmap

  • Train the CNN model
  • Add object detection function
  • Add lane detection function

See the open issues for a full list of proposed features (and known issues).
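The lane detection item above implies a lane-departure check: once the lane lines are located, compare the lane center against the vehicle's position in the frame. A minimal Python sketch in normalized image coordinates (the centered-camera assumption and the 0.15 offset threshold are illustrative, not values from this project):

```python
# Hedged sketch: flag a possible lane departure by comparing the
# midpoint of the detected lane lines with the vehicle's position.
# Coordinates are normalized to [0, 1]; the threshold is assumed.

DEPARTURE_THRESHOLD = 0.15  # allowed offset from lane center (assumed)

def lane_departure(left_x: float, right_x: float,
                   vehicle_x: float = 0.5) -> bool:
    """left_x / right_x: bottom-of-frame x positions of the lane lines.
    vehicle_x defaults to 0.5, assuming the camera is centered.
    Returns True when the vehicle drifts beyond the threshold."""
    lane_center = (left_x + right_x) / 2.0
    return abs(vehicle_x - lane_center) > DEPARTURE_THRESHOLD

# Centered lane (0.3, 0.7)  -> lane center 0.50, offset 0.00 -> no alarm
# Drifted lane (0.05, 0.45) -> lane center 0.25, offset 0.25 -> alarm
```

A real implementation would smooth the line positions over several frames before alarming, so a single noisy detection does not trigger a false alert.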

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/amazing-feature)
  3. Commit your Changes (git commit -m 'feat: Add some amazing-feature')
  • commit message
    <type>[optional scope]: <description>
    
    [optional body]
    
    [optional footer(s)]
    
  • commit type
    - feat: a commit of the type feat introduces a new feature to the codebase
    - fix: a commit of the type fix patches a bug in your codebase
    
  4. Push to the Branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

JO HYUK JUN - hyukzuny@gmail.com

Project Link: https://github.com/week-end-manufacture/ios-adas-app

(back to top)

Acknowledgments

(back to top)
