Projects


    1. Development of the Video Scene Analysis Technology for Video Recommendation System (비디오 추천 시스템을 위한 비디오 장면 분석 기술 개발), Dec. 2018 – Dec. 2019
      [PI], Funded by Gachon University, Korea
    2. Development of 3DoF+ 360-degree Video System for Immersive VR services (몰입형 VR 서비스를 위한 3DoF+ 360 비디오 표준화 연구), Jun. 2018 – May 2019
      [PI], Funded by LG Electronics Research
    3. Development of Compression and Transmission Technologies for Ultra High Quality Immersive Videos Supporting 6DoF (6DoF지원 초고화질 몰입형 비디오의 압축 및 전송 핵심 기술 개발), Jul. 2018 – Dec. 2020
      [PI], Funded by Institute for Information & communications Technology Promotion (IITP)
    4. Personalized Media Communication, Jan. 2018 – Dec. 2018
      [PI], Funded by InterDigital, USA
    5. Development of Healthcare for Senior with Artificial Intelligence Technology (인공지능기술 기반 시니어 헬스케어 기술 개발), Jul. 2017 – Jun. 2020
      (Collaborative research, PI: Prof. Taegkeun Whangbo, Department of Computer Engineering, Gachon University), Funded by Gyeonggi Regional Research Center (GRRC)
    6. Development of Brain Disease Prediction/Prevention Technology using Medical Big Data and Human Resource Development Program (의료 빅데이터를 활용한 뇌질환 예측 · 예방 기술개발 및 전문인력 양성), Jun. 2017 – Dec. 2020
      (Collaborative research, PI: Prof. Taegkeun Whangbo, Department of Computer Engineering, Gachon University), Funded by the Ministry of Science, ICT & Future Planning (ITRC)
    7. Reference SW Development for Viewport Dependent 360 Video Processing (360 비디오의 사용자 뷰포트 기반 프로세싱을 위한 레퍼런스 SW 개발), Jun. 2017 – Mar. 2018
      [PI], Funded by LG Electronics Research
    8. Development of Tiled Streaming Technology For High Quality VR Contents Real-Time Service (고품질 VR 콘텐츠 실시간 서비스를 위한 분할영상 스트리밍 기술 개발), Jun. 2017 – Dec. 2019
      [PI], Funded by the Ministry of Science, ICT & Future Planning
    9. Development of Multi-sensor Intelligent Edge Camera System (전력설비 고장 감시/사전진단을 위한 다중센서 융합 지능형 AV디바이스 및 플랫폼 개발), May 2017 – Apr. 2020
      [PI], Funded by the Korea Electric Power Corporation Research Institute
    10. Haptic Video Conferencing System for Individuals with Visual Impairments, Jul. 2015 – Jun. 2018
      [PI], Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037743)
    11. Haptic Telepresence System for Individuals with Visual Impairments, Apr. 2015 – Mar. 2017
      [PI], Funded by Gachon University, Korea 
    12. Sensor Networking Protocols for Emergency Data Collection, Jun. 2016 – Nov. 2016
      (Collaborative research, PI: Prof. Sungrae Cho, Ubiquitous Computing Lab., Chung-Ang University, Seoul, Korea), Funded by Electronics and Telecommunications Research Institute (ETRI), Korea
    13. Commercialization of smartphone/PC compatible mobile Braille pad and content production/service system, Aug. 2016 – Aug. 2019 
      (Collaborative research, PI: Prof. Jinsoo Cho, Department of Computer Engineering, Gachon University), Funded by Commercializations Promotion Agency for R&D Outcomes, Korea 


    Research


    Overall Research Goal: Merciless Video Processing (MVP)
    MVP targets video decoding speed-up for mobile VR by combining Tiled-SHVC with asymmetric mobile CPU multicores.

    [Figure: Merciless Video Processing (MVP) overview]


    1. [Video System] HEVC Parallel Processing for Asymmetric Multicore Processors

    1.1. Tile Partitioning-based HEVC Parallel Processing Optimization


    Recently, the need for parallel UHD video processing has been emerging, and computing systems with asymmetric processors such as ARM big.LITTLE are increasingly common. A parallel UHD video processing method optimized for such asymmetric multicore systems is therefore needed.
    This study proposes a novel HEVC tile partitioning method for parallel processing based on the computational power of asymmetric multicores. The proposed method (1) analyzes the computing power of the asymmetric cores and (2) builds a regression model of computational complexity as a function of video resolution; the model then (3) determines the optimal HEVC tile resolution for each core and partitions/allocates the tiles to the suitable cores.
    The proposed method minimizes the gap in decoding time between the fastest and the slowest CPU core. Experimental results with the official 4K UHD test sequences show an average 20% decoding speed-up on an ARM asymmetric multicore system.
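
    As a minimal sketch of the partitioning idea, the C++ fragment below sizes one tile row per core so that the estimated decoding load is balanced, assuming complexity is linear in pixel count (the simplest form of the regression model). The core names and relative throughput figures are illustrative placeholders, not measured values from this study.

        #include <cstdio>
        #include <vector>

        // Relative decode throughput per core; real values would be
        // profiled on the target big.LITTLE SoC (illustrative numbers).
        struct Core { const char* name; double throughput; };

        // Partition the picture into one tile row per core, sizing each
        // row so that (row pixels / core throughput) is equal across
        // cores, i.e., assuming complexity is linear in pixel count.
        std::vector<int> partitionTileRows(int picHeightCtus,
                                           const std::vector<Core>& cores) {
            double total = 0;
            for (const Core& c : cores) total += c.throughput;
            std::vector<int> rows;
            int assigned = 0;
            for (size_t i = 0; i < cores.size(); ++i) {
                int h = (i + 1 == cores.size())
                            ? picHeightCtus - assigned   // remainder to last core
                            : (int)(picHeightCtus * cores[i].throughput / total + 0.5);
                rows.push_back(h);
                assigned += h;
            }
            return rows;
        }

        int main() {
            // 4K UHD: 2160 pixel rows / 64-pixel CTUs = 34 CTU rows.
            std::vector<Core> cores = {
                {"big0", 1.8}, {"big1", 1.8},         // fast cores
                {"little0", 1.0}, {"little1", 1.0},   // slow cores
            };
            std::vector<int> rows = partitionTileRows(34, cores);
            for (size_t i = 0; i < rows.size(); ++i)
                std::printf("%s: %d CTU rows\n", cores[i].name, rows[i]);
            return 0;
        }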



    1.2. Prediction Complexity-based HEVC Parallel Processing Optimization


    We also study a new HEVC tile allocation method that considers the computational ability of asymmetric multicores as well as the computational complexity of each tile.
    The computational complexity of each tile can be estimated from the amount of HEVC prediction unit (PU) partitioning it contains.
    Our implemented system (1) counts and sorts the amounts of PU partitioning of the tiles and (2) allocates the tiles to the asymmetric big/LITTLE cores according to their expected computational complexity. The experiments used the 4K PeopleOnStreet test sequence; the three coding configurations defined in the common test conditions (CTC) of the HEVC standard, namely random access (RA), all intra (AI), and low-delay B (LDB); and six cores consisting of 2 big cores and 4 LITTLE cores.
    In the experiments, the amount of PU partitioning and the computational complexity (decoding time) showed a close correlation, and the average decoding-time gains were 5.24% with 6 tiles and 8.44% with 12 tiles. With adaptive allocation, the proposed method achieved an average gain of 18.03%.
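
    One plausible realization of this allocation step is sketched below in C++: tiles are assigned greedily, heaviest first, to the core that would finish them earliest given its relative speed. The per-tile PU-partitioning counts and core speeds are made-up numbers, and the greedy earliest-finish rule is an assumption for illustration, not necessarily the exact policy used in the study.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            // Per-tile PU-partitioning counts (proxy for decoding
            // complexity) and relative core speeds: 2 big + 4 LITTLE.
            // All numbers are illustrative.
            std::vector<double> puCount = {940, 610, 580, 410, 350, 220};
            std::vector<double> speed   = {1.8, 1.8, 1.0, 1.0, 1.0, 1.0};
            std::vector<double> finish(speed.size(), 0.0);

            // Sort tile indices by complexity, heaviest first.
            std::vector<int> order(puCount.size());
            for (size_t i = 0; i < order.size(); ++i) order[i] = (int)i;
            std::sort(order.begin(), order.end(),
                      [&](int a, int b) { return puCount[a] > puCount[b]; });

            // Assign each tile to the core with the earliest finish time.
            for (int t : order) {
                size_t best = 0;
                for (size_t c = 1; c < speed.size(); ++c)
                    if (finish[c] + puCount[t] / speed[c] <
                        finish[best] + puCount[t] / speed[best])
                        best = c;
                finish[best] += puCount[t] / speed[best];
                std::printf("tile %d -> core %zu\n", t, best);
            }
            return 0;
        }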

    Summary:

    Recently, the use of ultra-high-resolution video has raised the need for parallel processing in video systems, and computing systems with asymmetric processing capability, such as ARM big.LITTLE, are being adopted. A UHD video parallel processing technique optimized for such asymmetric computing environments is therefore needed.
    This work proposes an HEVC tile partitioning technique optimized for asymmetric computing environments at encoding/decoding time. The proposed method analyzes (1) the processing capability of the asymmetric CPU cores and (2) a model of computational complexity per video size, and then (3) allocates a tile of the optimal size to each core, minimizing the encoding/decoding time gap between the fast and slow CPU cores.
    Experiments with 4K UHD standard sequences on an ARM-based asymmetric multicore platform confirmed an average decoding-time improvement of about 20%.
    As another approach, we also study the case where the HEVC tiles were already uniformly partitioned at encoding time. This work (1) computes the per-tile computational complexity by analyzing the number of prediction unit (PU) partitionings and (2) decodes by allocating each tile to the asymmetric big/LITTLE cores based on this complexity. Experiments on the 4K PeopleOnStreet test content under the common test conditions of the HEVC standardization activity showed decoding speed-ups of 5.24% with 6 tiles and 8.44% with 12 tiles. In addition, core-adaptive tile allocation yielded a decoding speed-up of 18.03%.


    2. [Cyber-Physical System (CPS)] Haptic Telepresence System for Individuals with Visual Impairments

    [Figure: Haptic telepresence system overview]

    This study proposes a novel video conferencing system for individuals with visual impairments that uses an RGB-D sensor and a haptic device. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, real-time representation of these data as a tangible haptic experience has not been explored enough, especially in the case of telepresence. Thus, the proposed system addresses telepresence of remote 3D information captured by an RGB-D sensor through video encoding and 3D depth-map enhancement that utilizes both the 2D image and the depth map. In our implemented system, the RGB-D sensor is Microsoft's Kinect, which provides depth and color images at approximately 30 fps. Each Kinect depth frame is buffered, projected into a 3D coordinate system at a resolution of 640 by 480, and then transformed into a 3D map structure.
    To verify the benefits of the proposed video content adaptation method for individuals with visual impairments, this study conducts 3D video encoding experiments and user testing. In conclusion, the proposed system provides a new haptic telepresence service for individuals with visual impairments with an enhanced interactive experience.
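
    The projection step can be sketched in C++ as below: each depth sample is back-projected into camera space with a pinhole model. The function name and the Kinect-style intrinsics (fx, fy, cx, cy) are assumed values for illustration, not parameters taken from the implemented system.

        #include <cstdint>
        #include <vector>

        struct Point3D { float x, y, z; };

        // Back-project a 640x480 depth frame (in millimeters) into
        // camera-space points with a pinhole model. Intrinsics are
        // typical Kinect v1-style values, used here for illustration.
        std::vector<Point3D> depthToPointCloud(const std::vector<uint16_t>& depthMm) {
            const int   W = 640, H = 480;
            const float fx = 585.f, fy = 585.f;   // focal lengths (pixels)
            const float cx = 320.f, cy = 240.f;   // principal point

            std::vector<Point3D> cloud;
            cloud.reserve(depthMm.size());
            for (int v = 0; v < H; ++v) {
                for (int u = 0; u < W; ++u) {
                    uint16_t d = depthMm[v * W + u];
                    if (d == 0) continue;          // 0 means no measurement
                    float z = d * 0.001f;          // mm -> meters
                    cloud.push_back({(u - cx) * z / fx, (v - cy) * z / fy, z});
                }
            }
            return cloud;
        }

        int main() {
            std::vector<uint16_t> frame(640 * 480, 1500);  // flat wall at 1.5 m
            return depthToPointCloud(frame).empty() ? 1 : 0;
        }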


    Summary:

    With the development of information and communication technology and the spread of smartphones, ordinary users can now make video calls while seeing family and others anytime, anywhere, and can enjoy any video or photo they want. However, individuals with visual impairments have consistently been excluded from such services because of the lack of research and social infrastructure for them. To improve this situation, this study proposes a new kind of tactile TV system for individuals with visual impairments. The proposed system consists of 3D capture technology, real-time transmission/streaming technology, and haptic device and actuator control technology; it is an effort to enable visually impaired people with limited mobility to feel and recognize the facial contours of remote family members, even if only with limited capability, and to enjoy TV and photos. We are currently extending this research toward the development of a 2D Braille pad.


    3. [Video Standard] Viewport-Dependent 360-Degree Video Streaming

    360-degree video streaming for virtual reality (VR) is emerging. However, the computational ability and bandwidth of current VR devices are limited compared to what high-quality VR requires. To overcome these limits, we propose a new viewport-dependent streaming method that transmits 360-degree videos using High Efficiency Video Coding (HEVC) and the scalability extension of HEVC (SHVC). The proposed SHVC and HEVC encoders generate bitstreams whose tiles can be transmitted independently. The proposed extractor then extracts, from the bitstream generated by the proposed encoder, the sub-bitstream of the tiles corresponding to the viewport. An SHVC bitstream extracted by the proposed method consists of (i) an SHVC base layer (BL) that represents the entire 360-degree area and (ii) an SHVC enhancement layer (EL) with region-of-interest (ROI) tiles. When the proposed HEVC encoder is used, low-resolution and high-resolution sequences are encoded separately and serve as the BL and the EL, respectively. By transmitting the BL (low resolution) together with the EL (high resolution) ROI tiles, the proposed method reduces not only the computational complexity on the decoder side but also the required network bandwidth.
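
    A minimal sketch of the viewport-to-tile mapping is given below in C++, assuming an equirectangular tile grid and a simple rectangle-overlap test in yaw/pitch. The grid size and field of view are example values, yaw wraparound at ±180° is omitted for brevity, and a real extractor operates on motion-constrained tile sets in the compressed bitstream rather than on geometry alone.

        #include <cstdio>
        #include <vector>

        struct Viewport { double yawDeg, pitchDeg, hFovDeg, vFovDeg; };

        // Return the indices of EL tiles whose equirectangular extent
        // overlaps the viewport rectangle (no wraparound handling).
        std::vector<int> selectRoiTiles(const Viewport& vp, int cols, int rows) {
            std::vector<int> roi;
            double y0 = vp.yawDeg - vp.hFovDeg / 2, y1 = vp.yawDeg + vp.hFovDeg / 2;
            double p0 = vp.pitchDeg - vp.vFovDeg / 2, p1 = vp.pitchDeg + vp.vFovDeg / 2;
            for (int r = 0; r < rows; ++r) {
                for (int c = 0; c < cols; ++c) {
                    // Tile extent in degrees on the 360x180 ERP picture.
                    double tY0 = -180.0 + 360.0 * c / cols, tY1 = tY0 + 360.0 / cols;
                    double tP0 = 90.0 - 180.0 * (r + 1) / rows, tP1 = tP0 + 180.0 / rows;
                    if (tY0 < y1 && tY1 > y0 && tP0 < p1 && tP1 > p0)
                        roi.push_back(r * cols + c);
                }
            }
            return roi;
        }

        int main() {
            Viewport vp = {0.0, 0.0, 90.0, 90.0};  // user looks at front center
            // The BL (entire 360-degree area) is always sent; only the EL
            // ROI tiles listed below are extracted and sent in addition.
            for (int id : selectRoiTiles(vp, 6, 3)) std::printf("EL tile %d\n", id);
            return 0;
        }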


    4. [Communication] Real-Time 360-Degree Video Streaming over Millimeter-Wave Communication (802.11ad, 60 GHz)

    As a part of MCSL's MVP (Merciless Video Processing) research, this study provides an adaptive scheme for real-time 360-degree video streaming over millimeter-wave (mmWave) communications such as 60 GHz 802.11ad. Its implementation consists of two parts: (i) 360-degree video streaming with PC offloading over mmWave; and (ii) 360-degree video decoding, post-processing, and display using scalable high-efficiency video coding (SHVC) on a mobile VR device.
    To overcome the performance limitations of mobile VR devices, this research uses PC offloading as well as 360-degree video pre/post-processing. In addition, because the characteristics of mmWave communications differ considerably from those of conventional Wi-Fi networks in bandwidth and packet error rates, this research also seeks the best way to maintain QoS/QoE when the user's head-mounted display (HMD) moves. Research details and a short demo video clip can be found in the 'Demo' menu of this webpage.
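
    The adaptation logic can be illustrated with a simple rate controller, sketched in C++ below: it falls back to the base layer when the measured mmWave throughput collapses (e.g., during blockage caused by HMD movement) and restores the enhancement-layer ROI tiles only after the link has stayed stable for a few intervals. Every bitrate, threshold, and interval is a made-up example value, not a figure from this research.

        #include <cstdio>

        enum class Mode { BaseOnly, BasePlusRoi };

        // Hysteresis-based layer selection: drop to BL alone on a rate
        // collapse; re-enable the EL ROI tiles only after 3 consecutive
        // good intervals, to avoid oscillating on a flapping 60 GHz link.
        class RateController {
            Mode mode_ = Mode::BasePlusRoi;
            int  stableIntervals_ = 0;
        public:
            Mode update(double throughputMbps) {
                const double need = 25.0 /*BL*/ + 60.0 /*EL ROI tiles*/;  // Mbps
                if (throughputMbps < need) {            // blockage or fading
                    mode_ = Mode::BaseOnly;
                    stableIntervals_ = 0;
                } else if (++stableIntervals_ >= 3) {   // link stable again
                    mode_ = Mode::BasePlusRoi;
                }
                return mode_;
            }
        };

        int main() {
            RateController rc;
            double samples[] = {900, 70, 40, 300, 900, 900, 900};  // Mbps per interval
            for (double s : samples)
                std::printf("%5.0f Mbps -> %s\n", s,
                            rc.update(s) == Mode::BaseOnly ? "BL only" : "BL + EL ROI");
            return 0;
        }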