Computer vision based articulated motion understanding / Suren Kumar

Kumar, Suren
Bib ID
vtls002096442
Publication
Ann Arbor, Michigan : ProQuest Information and Learning, 2016.
Physical description
1 online resource (142 pages).
Electronic version
Notes
Digital Dissertation Consortium
LDR  03486ntm a2200517 i 4500
001  vtls002096442
003  VRT
005  20170512230800.0
006  m     o  d
007  cr m|unnnup|||
008  170512s2016    miu     obm         eng d
020  ## $a 9781339480220 $q (ebook)
035  ## $a (MiAaPQ)AAI10013476
039  9# $y 201705122308 $z VLOAD
040  ## $a MiAaPQ $b eng $e rda $c MiAaPQ $d TKU
100  1# $a Kumar, Suren, $e author
245  10 $a Computer vision based articulated motion understanding / $c Suren Kumar
264  #1 $a Ann Arbor, Michigan : $b ProQuest Information and Learning, $c 2016.
264  #4 $c ©2016
300  ## $a 1 online resource (142 pages).
336  ## $a text $b txt $2 rdacontent
337  ## $a computer $b c $2 rdamedia
338  ## $a online resource $b cr $2 rdacarrier
347  ## $a text file $b PDF $2 rda
490  1# $a Dissertation Abstracts International ; $v 77-07B(E)
500  ## $a Source: Dissertation Abstracts International, Volume: 77-07(E), Section: B.
500  ## $a Advisers: Venkat N. Krovi; Jason J. Corso.
502  ## $a Thesis $b (Ph.D.)-- $c State University of New York at Buffalo, $d 2016
504  ## $a Includes bibliographical references
506  ## $a Access restricted to Tamkang University users.
520  ## $a Articulated objects have components joined by kinematic joints that allow them to move with respect to each other. As robots move from industry floors to indoor environments and work in collaboration with humans, it is vital for robots to understand the articulated structure of the environment. For example, to open a door, a robot needs to find the door in the environment, estimate its rotation axis, and then take the appropriate control action to open it.
520  ## $a We consider a hierarchy of representations of articulated objects, starting from a bounding box, to pose, and further to the articulation itself, using only vision sensors (RGB/RGBD cameras). For object tracking with the bounding-box representation, we propose a Product of Tracking Experts model that combines object trackers focusing on specific motion and appearance characteristics of the object. For pose estimation and tracking, we propose an observation model using Gaussian Processes, combined with motion-continuity models to track object pose over time. We show connections to the human-language output that can be extracted from each level of the representational hierarchy. Finally, we demonstrate how language itself can help vision by exploiting the compositionality of language. The thesis presents applications ranging from surveillance and surgical safety feedback to Simultaneous Localization and Mapping (SLAM) in dynamic environments.
533  ## $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2016
538  ## $a Mode of access: World Wide Web
546  ## $a English
591  ## $a Digital Dissertation Consortium $b PQDT $c Tamkang University (2017)
653  ## $a Robotics.
653  ## $a Computer Science.
655  #7 $a Electronic books. $2 local
700  1# $a Krovi, Venkat N., $e thesis advisor
700  1# $a Corso, Jason J., $e thesis advisor
710  2# $a ProQuest Information and Learning Co.
710  2# $a State University of New York at Buffalo. $b Mechanical and Aerospace Engineering.
830  #0 $a Dissertation Abstracts International ; $v 77-07B(E).
856  41 $u http://info.lib.tku.edu.tw/ebook/redirect.asp?bibid=2096442 $z click for full text (PQDT)
999  ## $a VIRTUA00