BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Canada/Eastern
BEGIN:DAYLIGHT
DTSTART:20210314T030000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211107T010000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20210515T173142Z
UID:72CFAF82-DD21-4F52-9C3D-7434C9FCB3EC
DTSTART;TZID=Canada/Eastern:20210513T180000
DTEND;TZID=Canada/Eastern:20210513T200000
DESCRIPTION:Please join us for this year's IEEE Montréal Keynote Event:
 \n\nClaude Shannon's 1948 "A Mathematical Theory of Communication"
  provided the basis for the digital communication revolution. As part
  of that ground-breaking work\, he identified the greatest rate
  (capacity) at which data can be communicated over a noisy channel.
  His proposed scheme relied on random codes and code-centric Maximum
  Likelihood (ML) decoding\, where channel outputs are compared to all
  possible input codewords to select the most likely candidate based on
  the observed channel output. Despite its mathematical elegance\, his
  code-centric decoding algorithm is impractical from a complexity
  perspective\, and much work in the intervening 70 years has focused
  on co-designing codes and decoders that enable reliable communication
  at high rates.\n\nIn collaboration with Ken Duffy and his group\, we
  introduce a new algorithm\, Guessing Random Additive Noise Decoding
  (GRAND)\, for noise-centric\, rather than code-centric\, ML decoding.
  The receiver rank-orders noise effect sequences from most likely to
  least likely and guesses accordingly. When inverting noise effect
  sequences from the received signal in decreasing order of likelihood\,
  the first instance that results in an element of the codebook is the
  ML decoding. Our results show that\, with GRAND\, even extremely
  simple codes\, such as CRCs\, match or outperform state-of-the-art
  code/decoder pairs\, indicating that the choice of decoder is likely
  to be more important than that of code.\n\nWe illustrate the practical
  usefulness of our approach and discuss its hardware implementation\,
  done with Rabia Yazicigil and her group. For the sorts of channels
  generally used in commercial applications\, the complexity of the
  decoding is quite low\, unlike code-centric ML\, and the chip is able
  to decode any linear code.\n\nSpeaker(s): Professor Muriel
  Médard\n\nVirtual: https://events.vtools.ieee.org/m/268217
LOCATION:Virtual: https://events.vtools.ieee.org/m/268217
ORGANIZER:mailto:ieee.mtl.section@gmail.com
SEQUENCE:6
SUMMARY:IEEE Montreal Keynote Event - It's all in the noise - universal
  noise-centric decoding
URL;VALUE=URI:https://events.vtools.ieee.org/m/268217
END:VEVENT
END:VCALENDAR

