How to build a map from a bag in ROS (ros gmapping bag)
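The title question, building an occupancy-grid map from a recorded bag with gmapping, follows a standard ROS 1 workflow. Below is a minimal sketch that drives the usual command-line tools from Python; the bag name `mydata.bag`, the map name `mymap`, and the laser topic are placeholders for your own data, and `roscore` is assumed to be running already.

```python
import subprocess
import time

# Classic ROS 1 "map from a bag" workflow, scripted. Assumes roscore is
# running and the bag contains a laser topic plus the TF tree gmapping
# needs (odom -> base_link -> laser frame). Names below are placeholders.

# 1. Use simulated time so gmapping follows the bag's clock.
subprocess.check_call(["rosparam", "set", "use_sim_time", "true"])

# 2. Start slam_gmapping; remap "scan" to your bag's laser topic if needed.
gmapping = subprocess.Popen(["rosrun", "gmapping", "slam_gmapping",
                             "scan:=scan"])
time.sleep(2.0)  # give the node a moment to come up

# 3. Replay the bag, publishing /clock for the simulated time.
subprocess.check_call(["rosbag", "play", "--clock", "mydata.bag"])

# 4. Save the finished map as mymap.pgm + mymap.yaml.
subprocess.check_call(["rosrun", "map_server", "map_saver", "-f", "mymap"])
gmapping.terminate()
```

The same four steps are usually typed into separate terminals; the resulting mymap.pgm/mymap.yaml pair can then be served by map_server for localization.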

IEEE Robotics News; I read it the way I read entertainment news. If you want to go a bit deeper, skim the abstracts of the last few years of papers from the major robotics conferences, ICRA and IROS. Springer's Handbook of Robotics is also worth a look; it's a good general introduction to the robotics field.
Before the formal introduction: my motivation for writing this series is the information gap between China and abroad. The robotics department of our institute has a first-class research environment and facilities, ample funding, high-caliber colleagues, and a solid reputation across the robotics research community. The department is understaffed every year and publishes the corresponding PhD and postdoc calls, but they circulate only so far in China; meanwhile, many excellent Chinese bachelor's and master's graduates who want to do robotics research miss opportunities because they know little about robotics labs abroad. I hope a platform like Zhihu can introduce our lab to more people in China. It may not be your best choice, but it at least gives you one more choice.

(Personal note: thanks to Prof. Xiong Rong of Zhejiang University and Dr. Zhibin Li of the University of Edinburgh for helping with my PhD application.)

At the risk of praising my own wares, this series describes the basics of my robotics lab, organized by research line. Take what you need:

- If you are considering applying for a PhD or postdoc, check whether the specific research directions interest you (links are attached at the end; I won't go into technical detail in the text);
- If you simply want to learn about humanoid robots, read on (mostly pictures and popular-science narration).

I am a third-year PhD student at the Italian Institute of Technology (IIT). Frankly speaking, its robotics department is first-rate in Europe and recognized by peers worldwide.

Now the first research line, which is also my own department.

Research Line: Humanoids and Human Centered Mechatronics (HHCM), Italian Institute of Technology (IIT), Genova, Italy.

The department name roughly means "humanoid and human-centered mechatronic systems". Concretely, one group of hardware people designs and builds actuators and end-effectors for humanoid robots and integrates complex humanoid mechatronic systems; another group works on walking control, manipulation control, and machine vision, giving these humanoid systems locomotion and perception capabilities.

Since the department's core business is building humanoid robots, straight to the robots (photos in the original post):

- Walkman (large humanoid)

https://www.youtube.com/watch?v=kZzwVwzAWME
https://www.youtube.com/watch?v=tL_rKH4WVpQ (YouTube links)

Walkman is the lab's first-generation large humanoid: 1.9 m tall, 130 kg with battery, a participant in the 2015 DARPA Challenge. Being first generation, it has some hardware design issues and insufficient actuator performance, which limits its locomotion to a degree; see the YouTube links above for details.

- Coman (small humanoid)

https://www.youtube.com/watch?v=O5YcrT9rJNI&pbjreload=10

Coman is the lab's earliest small humanoid, with series elastic actuators (SEA) throughout the body. My current advisor designed it personally; it is his signature work (his Google Scholar details appear below). There are many videos and applications of Coman; searching "Coman iit" on YouTube turns up plenty of demos.

- Centauro (centaur type)

Centauro is currently the lab's main project; the mechanical design is finished, and assembly and electronics debugging are under way. I am part of this project: the second photo in the original shows the underactuated robotic hand I designed, which will be mounted as Centauro's left hand for powerful grasping, while the right hand will be a German Schunk Hand for precise grasping. A distinctive feature of Centauro is the actuated wheel at the end of each of its four legs, which lets it switch between wheeled and quadruped locomotion on different terrain, improving locomotion efficiency to a degree.

- Cogimon (large humanoid)

Cogimon is a project the department took on recently. Building on the Walkman and Coman designs, it aims at a new generation of whole-body torque control with an emphasis on collaboration among multiple humanoid robots (meaning more than one humanoid will be built). It is still in the design phase, so there are no photos; project site: https://cogimon.eu/

Finally, about my advisor (details from Google Scholar): academically respectable, and an editor in the corresponding areas of the mainstream robotics conferences IROS and ICRA and of the journal IEEE/ASME Transactions on Mechatronics.

As for character: excellent, summed up in eight Chinese characters, "strict with himself, lenient with others". Working under him is comfortable. An atypical Greek and unusually hardworking, he often replies at length to emails late at night, pushing you with his own diligence rather than with pressure.

Because this is my own department, a little more detail: the concrete topics of PhD students and postdocs recruited in recent years are mainly:

- robot actuator and end-effector design;
- SEA torque control;
- biped robot walking control;
- machine vision.

Full research line description: Humanoids & Human Centered Mechatronics, https://www.iit.it/research/lines/humanoids-human-centered-mechatronics
Six months ago I wrote a column article on robotic grasping introducing the basic research problems and methods (article: "机器人抓取" / Robotic Grasping). I said then that it was to be continued, and I had long wanted to cover grasping in more detail. In the first two days of 2017 I finally found time to organize this material.

Full PPT download link: https://www.dropbox.com/s/1febt3034ye579j/Miao_Talk_2016_new.pdf?dl=0

1. What is robotic grasping?

As the original figure shows, given an object and a hand, the basic problems of robotic grasping are three: (1) how to grasp, (2) how to control, and (3) how to manipulate. By difficulty, the three are roughly in increasing order.

(1) How to grasp? (Grasp Planning)

This is the problem a group of big names worked on from the start: Salisbury, Mason, Cutkosky, Khatib, and others. The question studied over and over: given an object, a task, and a hand, how should the hand grasp the object best? The most famous result in this direction is force closure, which for a long time was roughly as central as stability is in control theory. I worked on this for my first two PhD years; Sahar, the postdoc mentoring me then, did quite well in this direction before switching careers into finance. Lately the direction mostly follows the route of combining with learning; see Sergey Levine's work from his time at Google.

(2) How to control? (Grasp Control)

This is force control: fingertip force control, tactile control, stiffness control, impedance control, and so on. For a long time people tried to compute optimal fingertip grasping forces (incidentally, Prof. Imin Kao, the first person to mail me a paper copy of a publication when I started my master's, worked on grasp stiffness control). The best-known work here is from Martin Buss and Zexiang Li's group, which converts a nonlinear optimization problem into a linear matrix inequality problem solvable in tens of milliseconds. The best recent work is probably DLR's object-level impedance control (IJRR); its first author, Wimbock, has also changed careers. Those still holding this line are mainly a few Japanese professors, including my collaborator Kenji. Kenji's advisor, Prof. Arimoto, started on robotic grasping only after retirement and even wrote a book on it. He is an elder-god-level figure few people know today; his birthday symposium at IROS 2016 was quite high-end (2006 International Symposium on Advanced Robotics and Machine Intelligence: http://www2.mae.cuhk.edu.hk/~armi06/speakers.html).

(3) How to manipulate? (Dexterous Manipulation)

Dexterous manipulation has seen little good progress for many years, and few people touch it now. Cutkosky's PhD thesis was on it; he continued for a few years, then even renamed his lab, effectively changing direction. Peter Allen's group once had a postdoc on it who, I heard, died in a diving accident, a great pity. With soft robotics heating up recently, the direction seems to have a new opening; Oliver Brock has started working on it too.

2. Why is robotic grasping important?

Grasping research touches many fields: mechanical engineering, control, computer science, artificial intelligence, and more. Grasping often serves as a good minimal research example that supports work in each of these directions. The dexterity and interaction studied in grasping also feed into much related research. And grasping is an indispensable capability for robots entering the real world: hands are so important to humans that we naturally want to give robots the same capability.

3. Why is robotic grasping hard?

Grasping is so easy for us that we tend to assume it is easy for robots too, but it is actually quite hard. Anyone who has watched or entered grasping competitions knows the despair: the baseline of robotic grasping is lower than you would imagine. Yet the numbers in many papers come from highly contrived conditions, giving an impression of high success rates. My PhD thesis reported a success rate of about 30%; a committee member asked why it was so low, since that seemed implausible. But that is the reality.

The real world carries too much uncertainty. Because of it, the models we use in grasping are essentially inaccurate, even wrong, and we lack sensors good enough to feed back the true state in real time. Worse, we do not even have a good hand, so precisely driving a robot hand to a desired state is difficult. For a long time to come, handling these uncertainties will be a hot topic in grasping; it already is, with related workshops nearly every year. A plug for our ICRA 2017 workshop: https://sites.google.com/view/somca

4. Future directions?

With so many open problems, where do we go? My own summary: understand uncertainty better, exploit contact more, design more dexterous hands, build more reliable sensors. Spelling out each of these is another large hole to fill in later. (If I go to the US for a postdoc after my PhD, I will work on the first two.) If these are done well, grasping will not be far from large-scale real-world application.

Recently at our company we implemented random bin picking (无序抓取): https://mp.weixin.qq.com/s/G2cAQlDCtqCOumyTQ0B1Ag. If you know and are interested in robotic grasping, force control, robot learning, or embedded systems, feel free to get in touch. We also recruit strong interns long-term, with first-hand research and recommendations for going abroad.

Personal research homepage: https://sites.google.com/view/miaoli
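Returning to force closure from section 1 above: it has a standard computational test, in the spirit of Ferrari and Canny: discretize each friction cone into edge forces, map them to wrenches, and check that the wrench-space origin lies strictly inside their convex hull. A minimal sketch with hypothetical contact data (a planar toy example, not any paper's specific implementation):

```python
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches_2d(p, n, mu):
    """Approximate the planar friction cone at contact point p (normal n)
    by its two edge forces, and map each to a wrench (fx, fy, tau)."""
    t = np.array([-n[1], n[0]])                  # contact tangent
    wrenches = []
    for f in (n + mu * t, n - mu * t):           # friction cone edges
        f = f / np.linalg.norm(f)
        tau = p[0] * f[1] - p[1] * f[0]          # moment about the origin
        wrenches.append((f[0], f[1], tau))
    return wrenches

def is_force_closure(wrenches):
    """Standard test: the grasp is force-closure iff the wrench-space
    origin lies strictly inside the hull of the primitive wrenches."""
    try:
        hull = ConvexHull(np.asarray(wrenches))
    except Exception:                            # hull is degenerate (flat)
        return False
    # Facet i satisfies equations[i, :-1] @ x + equations[i, -1] <= 0 for
    # interior points; at x = 0 strict interiority means offset < 0.
    return bool(np.all(hull.equations[:, -1] < -1e-9))

# Hypothetical grasp: three contacts evenly spaced on a unit disc,
# normals pointing at the center, friction coefficient 0.5.
ws = []
for a in np.deg2rad([0.0, 120.0, 240.0]):
    p = np.array([np.cos(a), np.sin(a)])
    ws += contact_wrenches_2d(p, n=-p, mu=0.5)
print("force closure:", is_force_closure(ws))    # -> True
```

For this three-finger grasp on a disc the test returns True; with mu = 0 the wrench hull flattens (no torque can be resisted), so the same grasp fails the test.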
I mostly follow biped robots and deep reinforcement learning. A piece of work I find interesting is from Fei-Fei Li's group at Stanford: using an adversarial neural network in reinforcement learning to make the learned policy more robust: http://vision.stanford.edu/pdf/mandlekar2017iros.pdf
The keyword statistics of this year's IROS 2017 papers (figure omitted) make the development trend evident.

Since I work on humanoid robots and their low-level actuator and end-effector design, a few papers in this area are worth attention:

1. The first, this year's Best Paper Award winner from UC Berkeley:

Repetitive extreme-acceleration (14-g) spatial jumping with Salto-1P

In short: an extremely compact jumping robot capable of repeated, high-acceleration jumps. Judging from this year's IROS, many well-known American university labs are working on the design, application, and control of small, light, structurally compact jumping robots.

2. Next, the paper from HONDA in Japan:

Development of Experimental Legged Robot for Inspection and Disaster Response in Plants

In this author's view, HONDA's machine is the best biped humanoid seen so far apart from BDI's ATLAS 2. The demo was striking: a distinctive knee-joint design and excellent performance climbing stairs, crossing uneven ground, and walking while bent over, fully displaying HONDA's accumulated technical strength.

3. This IIT paper on the output torque bandwidth of SEA actuators is also worth reading:

What is the Torque Bandwidth of this Actuator?

It proposes a way to determine the torque output bandwidth from the actuator's own physical characteristics, independent of the particular controller, which is useful for setting actuator selection criteria.

In low-level humanoid actuation and control, American universities and institutes are following BDI toward compact robots built around direct-drive motors, while Europe and Japan still mostly build relatively traditional SEA-based robots and applications.

Robotics is a complex, multidisciplinary field; for noteworthy papers in other areas, researchers from those areas will have to fill in.
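A note on the SEA paper in item 3 (general series-elastic-actuator background, not the paper's specific method): in an SEA the spring doubles as the torque sensor, so the output torque is proportional to the spring deflection,

τ = k_s (θ_m − θ_l),

where k_s is the spring stiffness, θ_m the motor position reflected through the gearbox, and θ_l the link position. Tracking a torque profile therefore means tracking a deflection profile, so the achievable torque bandwidth is bounded by how fast the motor can move its end of the spring, i.e., by motor velocity and acceleration limits, regardless of the feedback controller; that is why a controller-independent bandwidth definition makes sense.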
For robots, control can be split by level into velocity-level control and acceleration-level control. Most industrial robots today offer velocity-level control, with velocity and position feedback, which is convenient to work with; but compared with torque control, the control signal is sometimes discontinuous, which makes the executed trajectory jitter. Torque control adds current feedback on top of velocity and position feedback; the result is more precise and continuous, but it requires an accurate model of the robot (Lagrangian or Newton-Euler). In practice, unless the manufacturer explicitly provides the robot's parameters (the M, C, G terms of the model), ordinary users can hardly make effective use of it. Exploiting model properties such as linearity in the parameters (LIP), adaptive neural-network and fuzzy control methods have been derived, but their stability is debatable: beyond accuracy and smoothness, the first goal of robot control is safety, and a large parameter overshoot during adaptation can easily injure a person or damage the robot. To handle model uncertainty there are also H2 and H-infinity robust control methods. In short, anything that exists in control theory can in principle be applied to a robot; whether it actually works well, and whether it beats PID, is another story.

Divided instead by whether the robot makes direct contact with the environment, control splits into non-contact control on one side and impedance control and hybrid force/position control on the other. The core idea of impedance control, in plain terms, is to make the robot emulate a mass-spring-damper system so as to achieve compliance. Hybrid force/position control decouples force and position and controls each separately. As I understand it, impedance and hybrid control form the robot's outer control loop, while the inner loop is the velocity-level or acceleration-level control mentioned above: the inner loop provides fast tracking, and the outer loop optimizes internal and external control objectives.

There is also learning-based control, for example using reinforcement learning, adaptive optimal control, or extremum-seeking control to tune PID parameters; these likewise live in the outer loop.

One impressive current line of robot control is by Pieter Abbeel's group at Berkeley, combining deep learning and optimal control for control with high-dimensional inputs (images in, control signals out). Their PR2 towel-folding videos online are seriously impressive; interested readers can look up the group's papers (search "guided policy search").

Finally, a few book recommendations:

- J. J. Craig, Introduction to Robotics: Mechanics and Control
- H. Asada and J.-J. E. Slotine, Robot Analysis and Control
- B. Siciliano and O. Khatib (eds.), Springer Handbook of Robotics

For video, Oussama Khatib's open course is worth watching.

Please credit the source when reposting.
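As an appendix to the answer above, here is a minimal one-dimensional sketch of the mass-spring-damper idea (hypothetical gains and masses, no particular robot's API): the commanded force makes the end-effector behave like a programmable spring-damper anchored at a reference point, so a push from the environment is absorbed compliantly.

```python
# One-dimensional Cartesian impedance control: the commanded force makes
# the end-effector behave like a spring-damper anchored at x_d.
K, D = 100.0, 20.0        # virtual stiffness [N/m] and damping [N*s/m]
m = 2.0                   # effective end-effector mass [kg] (assumed)
x_d = 0.5                 # reference position [m]

x, xd = 0.0, 0.0          # position and velocity
dt = 1e-3
for step in range(2000):  # simulate 2 s
    t = step * dt
    f_ext = -5.0 if 0.5 < t < 0.7 else 0.0   # a push from the environment
    f_cmd = K * (x_d - x) - D * xd           # impedance control law
    xdd = (f_cmd + f_ext) / m                # Newton: m * xdd = f_cmd + f_ext
    xd += xdd * dt
    x += xd * dt

print(f"final position: {x:.3f} m (reference {x_d} m)")
```

During the push the end-effector yields by about f_ext / K = 5 cm instead of fighting the disturbance with a stiff position loop, which is exactly the compliance described above.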
Kinematics + statics + (time-domain) dynamics + (frequency-domain) dynamics = dynamics.

You could put it this way:

Solving a robot's kinematics assumes every body is rigid; a purely geometric model is enough.

In statics, forces enter the picture, deformation appears, and with it come questions about the loaded machine's strength, stiffness K, and so on.

Time-domain dynamics accounts for the inertia in F = ma, usually at low frequency (analysis periods of 0.1 s or slower): a robot needs time to accelerate and decelerate, so do its motors, its gears have backlash, its shafts spin up gradually...

Frequency-domain dynamics accounts for vibration: even a statically stable robot can fail through vibration, or through the fatigue and wear it causes.

The difficulty grows roughly threefold at each layer, so a complete dynamic model and analysis is perhaps 27 times the work of a complete kinematic model and analysis (calling it 100 times would not be an exaggeration).

At present, robot companies in China, including some well-known ones, cannot guarantee even with many PhDs working together that a robot will not vibrate under every operating condition (somewhat as a BMW staying vibration-free at high speed is a comparable achievement).

Conclusion: kinematics is simple and foundational; dynamics is the deep, complex part. Control of both ultimately rests on dynamics, and control that ignores dynamics is a toy.
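A minimal sketch of the split for a single rigid link (hypothetical numbers): the kinematic question is answered by geometry alone, while the time-domain dynamic question needs inertia and gravity.

```python
import numpy as np

# One rigid link of length L rotating about a fixed axis, at angle q.
L, m, g = 0.4, 1.5, 9.81           # length [m], mass [kg], gravity [m/s^2]
I = m * L**2 / 3.0                 # uniform rod, inertia about its end

# Kinematics: pure geometry. Where is the tip for a given angle?
q = np.deg2rad(30.0)
tip = (L * np.cos(q), L * np.sin(q))

# Time-domain dynamics: what torque yields a desired acceleration?
# Rigid-body model, no friction: tau = I*qdd + m*g*(L/2)*cos(q)
qdd_des = 2.0                      # desired angular acceleration [rad/s^2]
tau = I * qdd_des + m * g * (L / 2.0) * np.cos(q)

print(f"tip position: ({tip[0]:.3f}, {tip[1]:.3f}) m")  # geometry answers this
print(f"required torque: {tau:.2f} N*m")                # inertia + gravity answer this
```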
About two years ago, I had just had my PhD proposal defense.

At the defense, a professor who mixed up "motion planning" and "trajectory planning" asked: "Isn't motion planning already a mature field? Surely it's no longer a research hotspot?"

To convince my advisor to let me keep working on motion planning, I went off and compiled the research hotspots of the robotics field.

(Of course, two years have passed since, so treat the timeliness with caution.)

Through Google Scholar, I found the three most influential conferences/journals under the Robotics topic (screenshot omitted).

Then I ran keyword statistics on all three.

First, ICRA 2015: the official site lists the papers under each keyword, so I counted the number of papers for every keyword (full bar chart omitted). Since that chart is unreadable at this size, here are just the keywords with the most papers (chart omitted).

Next, the field's god-tier journal, IJRR (The International Journal of Robotics Research): I counted the keywords of all its 2015 papers. Because IJRR keywords are not officially standardized the way ICRA's are, I merged keywords with similar meanings (chart omitted).

Finally, robotics' other top journal, TRO (IEEE Transactions on Robotics) (chart omitted).

Of course, a proper survey covers home and abroad together. For the domestic research landscape, CNKI's "Academic Trends" search tool (学术趋势搜索) returns the research trend for a keyword directly; a few examples were shown in the original (charts omitted).

From the statistics above, you can get a rough sense of which directions were the 2015 research hotspots in robotics.

As for me, I showed my advisor that "motion planning is hot, and look: many of our lab's research directions are hot directions!"

My advisor was pleased with the second half of that sentence, and so I duly jumped into the motion planning pit.
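The tally itself is easy to reproduce; here is a minimal sketch (hypothetical paper records, not the actual ICRA 2015 data) of the counting and synonym-merging described above:

```python
from collections import Counter

# Hypothetical (paper, keywords) records standing in for a conference dump.
papers = [
    ("paper-001", ["motion planning", "manipulation"]),
    ("paper-002", ["SLAM", "localization"]),
    ("paper-003", ["motion planning", "legged robots"]),
    ("paper-004", ["SLAM", "mapping"]),
]

# Merge near-synonyms by hand, as described for IJRR above.
ALIASES = {"localization": "SLAM", "mapping": "SLAM"}

counts = Counter()
for _, keywords in papers:
    # A paper counts once per merged keyword, even if two raw
    # keywords collapse onto the same one.
    counts.update({ALIASES.get(kw, kw) for kw in keywords})

for kw, n in counts.most_common():
    print(f"{kw}: {n} papers")
```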
## Contents

- Courses
- Papers
- Labs
- Datasets
- Open-source projects

## Courses

- [Udacity] Self-Driving Car Nanodegree Program (https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013) - teaches the skills and techniques used by self-driving car teams. The program syllabus is reviewed in the Medium post https://medium.com/self-driving-cars/term-1-in-depth-on-udacitys-self-driving-car-curriculum-ffcf46af0c08
- [University of Toronto] CSC2541 Visual Perception for Autonomous Driving (http://www.cs.toronto.edu/~urtasun/courses/CSC2541/CSC2541_Winter16.html) - a graduate course in visual perception for autonomous driving, briefly covering localization, ego-motion estimation, free-space estimation, and visual recognition (classification, detection, segmentation).
- [INRIA] Mobile Robots and Autonomous Vehicles (https://www.fun-mooc.fr/courses/inria/41005S02/session02/about?utm_source=mooc-list) - introduces the key concepts required to program mobile robots and autonomous vehicles, presenting both formal and algorithmic tools; the last week's topics (behavior modeling and learning) include realistic examples and programming exercises in Python.
- [University of Glasgow] ENG5017 Autonomous Vehicle Guidance Systems (http://www.gla.ac.uk/coursecatalogue/course/?code=ENG5017) - introduces the concepts behind autonomous vehicle guidance and coordination and enables students to design and implement guidance strategies incorporating planning, optimising, and reacting elements.
- [David Silver - Udacity] How to Land An Autonomous Vehicle Job: Coursework (https://medium.com/self-driving-cars/how-to-land-an-autonomous-vehicle-job-coursework-e7acc2bfe740) - David Silver, from Udacity, reviews his coursework for landing a job in self-driving cars coming from a software engineering background.
- [Stanford] CS221 Artificial Intelligence: Principles and Techniques (http://stanford.edu/~cpiech/cs221/index.html) - contains a simple self-driving project and simulator.
- [MIT] 6.S094: Deep Learning for Self-Driving Cars (http://selfdrivingcars.mit.edu/) - an introduction to the practice of deep learning through the applied theme of building a self-driving car.
- [MIT] 2.166 Duckietown (http://duckietown.mit.edu/index.html) - a graduate-level, hands-on, project-focused class on the science of autonomy, centered on self-driving vehicles and high-level autonomy. The problem: design the autonomous robo-taxi system for the city of Duckietown.

## Papers

## General

- [2016] Combining Deep Reinforcement Learning and Safety Based Control for Autonomous Driving.
- [2015] An Empirical Evaluation of Deep Learning on Highway Driving.
- [2015] Self-Driving Vehicles: The Challenges and Opportunities Ahead. (http://dl.acm.org/citation.cfm?id=2823464)
- [2014] Making Bertha Drive - An Autonomous Journey on a Historic Route.
- [2014] Towards Autonomous Vehicles.
- [2013] Towards a viable autonomous driving research platform.
- [2013] An ontology-based model to determine the automation level of an automated vehicle for co-driving.
- [2013] Autonomous Vehicle Navigation by Building 3d Map and by Detecting Human Trajectory Using Lidar.
- [2012] Autonomous Ground Vehicles - Concepts and a Path to the Future.
- [2011] Experimental Evaluation of Autonomous Driving Based on Visual Memory and Image-Based Visual Servoing.
- [2011] Learning to Drive: Perception for Autonomous Cars.
- [2010] Toward robotic cars. (http://dl.acm.org/citation.cfm?id=1721679)
- [2009] Autonomous Driving in Traffic: Boss and the Urban Challenge.
- [2009] Mapping, navigation, and learning for off-road traversal.
- [2008] Autonomous Driving in Urban Environments: Boss and the Urban Challenge.
- [2008] Caroline: An autonomously driving vehicle for urban environments.
- [2008] Design of an Urban Driverless Ground Vehicle.
- [2008] Little Ben: The Ben Franklin Racing Team's Entry in the 2007 DARPA Urban Challenge.
- [2008] Odin: Team VictorTango's Entry in the DARPA Urban Challenge.
- [2008] Robosemantics: How Stanley the Volkswagen Represents the World.
- [2008] Team AnnieWAY's autonomous system for the 2007 DARPA Urban Challenge.
- [2008] The MIT-Cornell collision and why it happened.
- [2007] Self-Driving Cars - An AI-Robotics Challenge.
- [2007] 2007 DARPA Urban Challenge: The Ben Franklin Racing Team Team B156 Technical Paper.
- [2007] Team Mit Urban Challenge Technical Report.
- [2007] DARPA Urban Challenge Technical Report Austin Robot Technology.
- [2007] Spirit of Berlin: an Autonomous Car for the Darpa Urban Challenge Hardware and Software Architecture.
- [2007] Team Case and the 2007 Darpa Urban Challenge.
- [2006] A Personal Account of the Development of Stanley, the Robot That Won the DARPA Grand Challenge.
- [2006] Stanley: The robot that won the DARPA Grand Challenge.

## LiDAR and point clouds

- [2017] PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. (code: https://github.com/charlesq34/pointnet)
- [2017] 3D Fully Convolutional Network for Vehicle Detection in Point Cloud.
- [2017] Fast LIDAR-based Road Detection Using Fully Convolutional Neural Networks.
- [2016] Motion-based Detection and Tracking in 3D LiDAR Scans. (video: https://youtu.be/cyufiAyTLE0)
- [2016] Lidar-based Methods for Tracking and Identification. (http://publications.lib.chalmers.se/records/fulltext/972.pdf, video: https://youtu.be/_Mhgm2BXdFI)
- [2015] Efficient L-shape fitting of laser scanner data for vehicle pose estimation. (http://ieeexplore.ieee.org/abstract/document/7274568/)
- [2014] Road Detection Using High Resolution LIDAR. (http://ieeexplore.ieee.org/abstract/document/7007125/)
- [2012] LIDAR-based 3D Object Perception. (http://www.cs.princeton.edu/courses/archive/spring11/cos598A/pdfs/Himmelsbach08.pdf)
- [2011] Radar/Lidar sensor fusion for car-following on highways. (http://ieeexplore.ieee.org/abstract/document/6144918/)
- [2009] Real-time road detection in 3d point clouds using four directions scan line gradient criterion.
- [2006] Real-time Pedestrian Detection Using LIDAR and Convolutional Neural Networks. (http://ieeexplore.ieee.org/abstract/document/1689630/)

## Localization and mapping

- [2016] MultiCol-SLAM - A Modular Real-Time Multi-Camera SLAM System.
- [2016] Image Based Camera Localization: an Overview.
- [2016] Ubiquitous real-time geo-spatial localization. (http://dl.acm.org/citation.cfm?id=3005426)
- [2016] Robust multimodal sequence-based loop closure detection via structured sparsity. (http://www.roboticsproceedings.org/rss12/p43.pdf)
- [2016] SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition. (http://ieeexplore.ieee.org/document/7839213/, code: https://github.com/hanfeiid/SRAL)
- [2015] Precise Localization of an Autonomous Car Based on Probabilistic Noise Models of Road Surface Marker Features Using Multiple Cameras.
- [2013] Planar Segments Based Three-dimensional Robotic Mapping in Outdoor Environments.
- [2013] Vehicle Localization along a Previously Driven Route Using Image Database.
- [2012] Can priors be trusted? Learning to anticipate roadworks.
- [2009] Laser Scanner Based Slam in Real Road and Traffic Environment.
- [2007] Map-Based Precision Vehicle Localization in Urban Environments.

## Perception

- [2016] VisualBackProp: visualizing CNNs for autonomous driving.
- [2016] Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?
- [2016] Lost and Found: Detecting Small Road Hazards for Self-Driving Vehicles.
- [2016] Image segmentation of cross-country scenes captured in IR spectrum.
- [2016] Traffic-Sign Detection and Classification in the Wild.
- [2016] Persistent self-supervised learning principle: from stereo to monocular vision for obstacle avoidance.
- [2016] Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion.
- [2016] Joint Attention in Autonomous Driving (JAAD). (data: http://data.nvision2.eecs.yorku.ca/JAAD_dataset/)
- [2016] Perception for driverless vehicles: design and implementation.
- [2016] Robust multimodal sequence-based loop closure detection via structured sparsity. (http://www.roboticsproceedings.org/rss12/p43.pdf)
- [2016] SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition. (http://ieeexplore.ieee.org/document/7839213/, code: https://github.com/hanfeiid/SRAL)
- [2015] Pixel-wise Segmentation of Street with Neural Networks.
- [2015] Deep convolutional neural networks for pedestrian detection.
- [2015] Fast Algorithms for Convolutional Neural Networks.
- [2015] Fusion of color images and LiDAR data for lane classification. (http://dl.acm.org/citation.cfm?id=2820859)
- [2015] Environment Perception for Autonomous Vehicles in Challenging Conditions Using Stereo Vision.
- [2015] Intention-aware online POMDP planning for autonomous driving in a crowd.
- [2015] Survey on Vanishing Point Detection Method for General Road Region Identification.
- [2015] Visual road following using intrinsic images.
- [2014] Rover – a Lego* Self-driving Car.
- [2014] Classification and Tracking of Dynamic Objects with Multiple Sensors for Autonomous Driving in Urban Environments.
- [2014] Generating Omni-directional View of Neighboring Objects for Ensuring Safe Urban Driving.
- [2014] Autonomous Visual Navigation and Laser-Based Moving Obstacle Avoidance.
- [2014] Extending the Stixel World with online self-supervised color modeling for road-versus-obstacle segmentation.
- [2014] Modeling Human Plan Recognition Using Bayesian Theory of Mind.
- [2013] Focused Trajectory Planning for autonomous on-road driving.
- [2013] Avoiding moving obstacles during visual navigation.
- [2013] Mobile robot navigation system in outdoor pedestrian environment using vision-based road recognition.
- [2013] Obstacle detection and mapping in low-cost, low-power multi-robot systems using an Inverted Particle Filter.
- [2013] Real-time estimation of drivable image area based on monocular vision.
- [2013] Road model prediction based unstructured road detection.
- [2013] Selective Combination of Visual and Thermal Imaging for Resilient Localization in Adverse Conditions: Day and Night, Smoke and Fire.
- [2012] Road Tracking Method Suitable for Both Unstructured and Structured Roads.
- [2012] Autonomous Navigation and Sign Detector Learning.
- [2012] Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle.
- [2012] Learning in Reality: a Case Study of Stanley, the Robot That Won the Darpa Challenge.
- [2012] Portable and Scalable Vision-Based Vehicular Instrumentation for the Analysis of Driver Intentionality.
- [2012] What could move? Finding cars, pedestrians and bicyclists in 3D laser data.
- [2012] The Stixel World.
- [2011] Stereo-based road boundary tracking for mobile robot navigation.
- [2009] Autonomous Information Fusion for Robust Obstacle Localization on a Humanoid Robot.
- [2009] Learning long-range vision for autonomous off-road driving.
- [2009] On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter.
- [2008] The Area Processing Unit of Caroline - Finding the Way through DARPA's Urban Challenge.
- [2008] Vehicle detection and tracking for the Urban Challenge.
- [2007] Low cost sensing for autonomous car driving in highways.
- [2007] Stereo and Colour Vision Techniques for Autonomous Vehicle Guidance.
- [2000] Real-time multiple vehicle detection and tracking from a moving vehicle.

## Navigation and path planning

- [2017] Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car.
- [2016] A Self-Driving Robot Using Deep Convolutional Neural Networks on Neuromorphic Hardware.
- [2016] End to End Learning for Self-Driving Cars.
- [2016] A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles.
- [2016] A Convex Optimization Approach to Smooth Trajectories for Motion Planning with Car-Like Robots.
- [2016] Routing Autonomous Vehicles in Congested Transportation Networks: Structural Properties and Coordination Algorithms.
- [2016] Machine Learning for Visual Navigation of Unmanned Ground Vehicles.
- [2016] Real-time self-driving car navigation and obstacle avoidance using mobile 3D laser scanner and GNSS.
- [2016] Watch this: Scalable cost-function learning for path planning in urban environments.
- [2015] DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving. (data and code: http://deepdriving.cs.princeton.edu/)
- [2015] Automatic Driving on Ill-defined Roads: An Adaptive, Shape-constrained, Color-based Method. (data: http://www.aber.ac.uk/en/cs/research/ir/dss/#road-driving)
- [2015] A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment.
- [2015] How Much of Driving Is Preattentive?
