BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
DTSTART:20260308T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20261101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260322T150208Z
UID:27EFBBA3-8F41-4429-8560-6A339D57693B
DTSTART;TZID=America/New_York:20260319T100000
DTEND;TZID=America/New_York:20260319T110000
DESCRIPTION:Title: View Planning for 3D Reconstruction of Plants\n\nD
 r. Nikos Papanikolopoulos\nDirector\, Robotics Institute\, UMN\n\nAb
 stract: Active vision (AV) has been in the spotlight of robotics res
 earch due to its emergence in numerous applications\, including agri
 culture and biomedicine\, to name a few. A major AV problem that has g
 ained popularity is the 3D reconstruction of targeted environments fr
 om multiple 2D views. While collecting and processing a large number o
 f arbitrarily taken 2D images may become an arduous process in severa
 l practical settings\, an efficient solution is to seek the optimal p
 lacement of available cameras in 3D space to obtain the necessary vis
 ual information from fewer yet more informative images to effectively r
 econstruct environments of interest. This process\, termed view plann
 ing (VP)\, can be markedly challenged in the presence of noise emergi
 ng in the environment\, in the locations of the cameras\, and/or in t
 he extracted images.\n\nWe present an efficient and realistic VP pipe
 line\, which aims to optimize the viewpoints of cameras and hence the q
 uality of the 3D reconstruction of a field of row crops without the n
 eed for a given mesh model. This is achieved in four steps: (i) an in
 itial flight to obtain a sparse point cloud\, (ii) the generation of a
 n initial simple mesh model utilizing the sparse point cloud\, (iii) t
 he planning of images via a discrete optimization process\, and (iv) a s
 econd flight to obtain the final reconstruction. We demonstrate the e
 ffectiveness of the proposed VP framework against commonly used basel
 ine methods for agricultural data collection and processing. This is j
 oint work with A. Bacharis\, H. Nelson\, K. Polyzos\, and G. Giannaki
 s.\n\nBio: Prof. Papanikolopoulos (IEEE Fellow\, NAI Fellow) received h
 is Ph.D. in Electrical and Computer Engineering from Carnegie Mellon U
 niversity. His thesis was entitled “Controlled Active Vision” and focu
 sed on using computer vision in a controlled fashion to detect\, trac
 k\, and manipulate objects in the environment.\n\nHis research work h
 as focused on robotics\, agriculture\, image processing\, computer vi
 sion\, and intelligent transportation systems. He has received numero
 us honors and awards for his research and contributions. He has been a D
 istinguished McKnight University Professor at the University of Minne
 sota since 2007 and a McKnight Presidential Endowed Professor in Comp
 uter Science since 2016. In 2016\, he received the IEEE RAS George Sa
 ridis Leadership Award in Robotics and Automation as well as the Cent
 er for Transportation Studies Research Partnership Award.\n\nSpeaker(
 s): Nikos Papanikolopoulos\n\nVirtual: https://events.vtools.ieee.org
 /m/545644
LOCATION:Virtual: https://events.vtools.ieee.org/m/545644
ORGANIZER:mailto:guoyulu62@gmail.com
SEQUENCE:37
SUMMARY:View Planning for 3D Reconstruction of Plants
URL;VALUE=URI:https://events.vtools.ieee.org/m/545644
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><div>Title: View Plan
 ning for 3D Reconstruction of Plants</div>\n<div>&nbsp\;</div>\n<div
 >&nbsp\;</div>\n<div>Dr. Nikos Papanikolopoulos</div>\n<div>Director
 \, Robotics Institute\, UMN</div>\n<div>&nbsp\;</div>\n<div><img src
 ="https://events.vtools.ieee.org/vtools_ui/media/display/d5ef528c-2f
 c3-452e-bc4a-264a0d369ae5" alt="" width="365" height="437"></div>\n<
 div>&nbsp\;</div>\n<div>Abstract: Active vision (AV) has been in the s
 potlight of robotics research due to its emergence in numerous appli
 cations\, including agriculture and biomedicine\, to name a few. A m
 ajor AV problem that has gained popularity is the 3D reconstruction o
 f targeted environments from multiple 2D views. While collecting an
 d processing a large number of arbitrarily taken 2D images may becom
 e an arduous process in several practical settings\, an efficient so
 lution is to seek the optimal placement of available cameras in 3D s
 pace to obtain the necessary visual information from fewer yet more i
 nformative images to effectively reconstruct environments of interes
 t. This process\, termed view planning (VP)\, can be markedly challe
 nged in the presence of noise emerging in the environment\, in the l
 ocations of the cameras\, and/or in the extracted images.</div>\n<di
 v>&nbsp\;</div>\n<div>We present an efficient and realistic VP pipel
 ine\, which aims to optimize the viewpoints of cameras and hence the q
 uality of the 3D reconstruction of a field of row crops without the n
 eed for a given mesh model. This is achieved in four steps: (i) an i
 nitial flight to obtain a sparse point cloud\, (ii) the generation o
 f an initial simple mesh model utilizing the sparse point cloud\, (i
 ii) the planning of images via a discrete optimization process\, an
 d (iv) a second flight to obtain the final reconstruction. We demon
 strate the effectiveness of the proposed VP framework against commo
 nly used baseline methods for agricultural data collection and proc
 essing. This is joint work with A. Bacharis\, H. Nelson\, K. Polyzo
 s\, and G. Giannakis.</div>\n<div>&nbsp\;</div>\n<div>Bio: Prof. Pa
 panikolopoulos (IEEE Fellow\, NAI Fellow) received his Ph.D. in Ele
 ctrical and Computer Engineering from Carnegie Mellon University. H
 is thesis was entitled &ldquo\;Controlled Active Vision&rdquo\; and f
 ocused on using computer vision in a controlled fashion to detect\, t
 rack\, and manipulate objects in the environment.</div>\n<p>His res
 earch work has focused on robotics\, agriculture\, image processing
 \, computer vision\, and intelligent transportation systems. He has r
 eceived numerous honors and awards for his research and contributio
 ns. He has been a Distinguished McKnight University Professor at th
 e University of Minnesota since 2007 and a McKnight Presidential En
 dowed Professor in Computer Science since 2016. In 2016\, he receiv
 ed the IEEE RAS George Saridis Leadership Award in Robotics and Aut
 omation as well as the Center for Transportation Studies Research P
 artnership Award.</p>
END:VEVENT
END:VCALENDAR