2018 Western New York Image and Signal Processing Workshop


The Western New York Image and Signal Processing Workshop (WNYISPW) is a venue for promoting image and signal processing research in our area and for facilitating interaction between academic researchers, industry researchers, and students. The workshop comprises both oral and poster presentations.

The workshop, building on 20 successful years of the Western New York Image Processing Workshop (WNYIPW), is run by the Rochester chapter of the IEEE Signal Processing Society with sponsorship from the Rochester chapter of the Society for Imaging Science and Technology, NVIDIA, and MathWorks.

The workshop will be held on Friday, October 05, 2018, in Louise Slaughter Hall (Building SLA/078) at Rochester Institute of Technology in Rochester, NY.


Topics of interest include:
  • Formation, Processing, and/or Analysis of Signals, Images, or Video
  • Computer Vision
  • Information Retrieval
  • Image and Color Science
  • Applications of Image and Signal Processing, including:
    • Medical Image and Signal Analysis
    • Audio Processing and Analysis
    • Remote Sensing
    • Archival Imaging
    • Printing
    • Consumer Devices
    • Security
    • Surveillance
    • Document Imaging
    • Art Restoration and Analysis
    • Astronomy

 Invited speakers include Dr. Simon Lucey of Carnegie Mellon University (keynote details below).


Please visit our website for more information!

Thank you to all of our sponsors!


Registration fees:

General Registration: $60 (with online registration by 09/21), $70 (after 09/21)
Student Registration: $40 (with online registration by 09/21), $50 (after 09/21)
(Students, please consider joining IEEE! Save $10 on this event by joining; student members pay only $30 to join IEEE.)

IEEE or IS&T Members: $35 (with online registration by 09/21), $50 (after 09/21)
IEEE or IS&T Student Members: $25 (with online registration by 09/21), $40 (after 09/21)

Note: You can join IS&T at the registration desk for $20.

Onsite registration will also be available, with onsite registration fees payable by cash or check. Fees cover attendance at all sessions and include breakfast, lunch, and an afternoon snack.

  Date and Location

  • Friday, October 05, 2018
  • Rochester Institute of Technology
  • Rochester, New York
  • United States
  • Building: Louise Slaughter Hall (Building SLA/078)

  • Raymond Ptucha, PhD
    Chair, IEEE Signal Processing Society, Rochester Chapter

  • Starts 25 August 2018 08:54 AM
  • Ends 05 October 2018 05:00 PM
  • All times are America/New_York


Dr. Simon Lucey, The Robotics Institute, Carnegie Mellon University


How Do You Know What a Deep Network is Learning for a Vision Task?

Modern deep learning algorithms are able to learn on training sets such that they achieve almost zero training error. What is all the more amazing is that this performance tends to generalize well to unseen data, especially for visual detection and classification tasks. Increasingly, deep methods are being utilized in vision tasks such as object tracking and visual SLAM (VSLAM). These tasks differ fundamentally from the traditional vision tasks where deep learning has been effective (e.g., object detection and classification), as they attempt to model the relative relationship between image frames. Although they achieve state-of-the-art performance on many benchmarks, it is easy to demonstrate empirically that deep methods are not always learning what we want them to learn for a given visual task, limiting their practical use in real-world applications. In this talk we shall discuss recent advances my group has made toward making better guarantees on the generalization of deep learning methods for visual tasks where the relative relationship between images is important, most notably object tracking and VSLAM. In particular, we shall discuss a new paradigm for efficient and generalizable object tracking, which we refer to as Deep-LK. We shall also discuss how these insights can be applied in recent applications of deep learning to VSLAM. Finally, we will show some initial results on how geometric constraints can be elegantly combined with deep learning to further improve generalization performance.

Results from a recent CVPR 2018 paper: Chaoyang Wang, Jose Miguel Buenaposada, Rui Zhu, and Simon Lucey, "Learning Depth From Monocular Videos Using Direct Methods."


Simon Lucey is an Associate Research Professor in the Robotics Institute at Carnegie Mellon University, where he is part of the Computer Vision Group and leader of the CI2CV Laboratory. He is also a Senior Research Scientist at Argo AI (an autonomous vehicle startup based in Pittsburgh). Before returning to CMU, he was a Principal Research Scientist at CSIRO (Australia's premier government science organization) for 5 years. A central goal of his research approach is to stay true, in terms of research and industry engagement, to the grand goals of computer vision as a scientific discipline. Specifically, he wants to draw inspiration from vision researchers of the past in an attempt to unlock the computational and mathematical models that underlie the processes of visual perception.


Address: Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania, United States, 15213


Important Dates

Paper submission opens: August 13, 2018
Paper submission closes: September 17, 2018
Notification of Acceptance: September 24, 2018
Early (online) registration deadline: September 21, 2018
Submission of camera-ready paper: October 8, 2018
Workshop: October 05, 2018






Tentative Conference at a Glance 

  • 8:30-8:55am, Registration, breakfast
  • 8:55-9am, Welcome
  • 9am-12:30pm, Oral presentations
  • 10:30am-12:30pm, Tutorial
  • 12:30-2pm, Lunch and posters
  • 2-3pm, Keynote
  • 3-5pm, Oral presentations
  • 3-5pm, Tutorial
  • 5-5:15pm, Awards


To encourage student participation, a best student paper award and a best student poster award will be given.


