15 JULY 2023 - 15 JAN 2024 | VOL 02 - ISSUE 01
NEWSLETTER
REFLECTIONS: FROM THE VICE CHANCELLOR’S DESK
As we usher in a new year of progress and innovation, we center this edition of 1729 on the theme of ‘Tech for All’. In these pages, we delve into the diverse and impactful ways in which technology must be harnessed to address societal challenges and promote inclusivity.
These days, conversations around us are centered on data science and its transformative impact. But how many of us know about the historical and scientific dimensions of the subject? In the third issue of 1729, Professor Rajesh Gupta, a luminary in the field, gives us a refresher with exemplary insights.
Our campus is a microcosm of society where we nurture young individuals and prepare them not only to face real-world challenges but also to address them and make a difference. Professor M Balakrishnan's journey in assistive technology and social entrepreneurship serves as an inspiration for all our faculty and students.
With such scholars and outstanding faculty guiding our academic community, our students are inspired and are making strides in leveraging technology to address societal problems. These tangible outcomes highlight our rich academic tapestry and embody Plaksha's spirit of innovation, inclusivity and meaningful impact.
I envision a future where technology serves all, and Plaksha continues to lead the way.
Christened ‘1729’, after the Hardy-Ramanujan number, the Plaksha newsletter is a window into our thriving, interconnected, and learner-centered environment where Plakshans look beyond the obvious, just like Srinivasa Ramanujan did with the seemingly dull number ‘1729’.

Through this newsletter, we share the contributions of each member of our vibrant community of learners, researchers, leaders, innovators and problem solvers to reimagining technology education.
Prof. Rudra Pratap
Founding Vice Chancellor
Plaksha University
TECH FOR ALL

Tech must address societal challenges and promote inclusivity
Plaksha Think Tank

DATA SCIENCE AND INTELLIGENCE: SEEING THROUGH THE ‘EYES’ OF HISTORY AND SCIENCE

The Bastille fell in 1789, 13 years after the American Declaration of Independence, a period marked by war and upheaval on both sides of the Atlantic, one that demanded technological advantages to win the war. Two of the areas in which Napoleon sought to overcome the British were steel and the conversion of heat to work and motion. He tasked a young Sadi Carnot with building these capabilities in a small military school that gave us the polytechnic legacy, a precursor to engineering as a discipline later perfected in the United States.

This led to the peacetime dividend of the 1840s, a period also marked as the European Spring, before we entered the next phase of upheaval between the two world wars. Our engineering legacy defined a century of enormous progress. It raised a talent pool trained to be engineers from the beginning, for no amount of scientific knowledge alone would make an engineer.

A century after the rise of the first engineering schools in America, we saw the emergence of computing as a discipline with its roots in engineering and math, and an early glimpse into computation and computing machines. Almost immediately, it gave a glimpse of how human thought could be emulated, and with it the vision of ‘thinking machines’. Thus, the first forays into artificial intelligence (AI) arose in the early 1950s.

In the decades since, AI has seen three generations of technological advances, each with its own ‘winters’, as the early excitement eventually led to disappointment in putting these advances to meaningful use, at least in competition with more targeted domain knowledge. The vision of intelligent machines stood in contrast to the reality of failures in machine translation, speech recognition, and expert systems.

Yet progress was being made, slowly and steadily. Optical character recognition was put to practical use, and it became a driver for limited image classification efforts such as handwriting recognition on networks of ‘neurons’: multiplicative weight elements that updated their weights against known outputs by back-propagation of error, a version of the chain rule that efficiently computes the gradient of a loss function. In the meantime, driven by neuroscientists' discoveries about how the visual nervous system is organized, single-layer networks of neurons transitioned to alternating layers of simple and complex neurons interspersed with normalization and non-linear functions.
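The chain-rule machinery described above can be made concrete in a few lines. The sketch below is an illustration, not drawn from the article: a tiny two-layer network of weighted ‘neurons’ fitted to a hypothetical toy task (the XOR truth table) by back-propagating the error to obtain the gradient of a squared-error loss. It assumes only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: learn XOR from its truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for step in range(5000):
    # Forward pass: weighted sums with a non-linearity (sigmoid) between layers.
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    loss = np.mean((out - y) ** 2)            # squared-error loss

    # Backward pass: the chain rule, propagating the error from output to input.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent weight update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```

Each line of the backward pass is one application of the chain rule; stacking many such layers is what later made ‘deep’ networks trainable.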
Modeled after the visual cortex, the resulting convolutional neural networks provided a systematic means of optimization that automatically identified spatial ‘features’ in images. By the early 2000s, advances in computing hardware demonstrated measurable progress in the ability of convolutional neural networks to classify images and handwritten text.

Driven by a standard set of benchmark challenges, progress in automatic image classification was rapid and dramatic. While it is still not very clear why these networks, the so-called deep neural networks, worked, a combination of happy coincidences, including the availability of large amounts of labeled images over the internet, cloud computing, and significant parallel processing using graphical co-processors, enabled advances in pattern recognition that would soon expand beyond recognizing images to speech and text. In its latest incarnation, AI had, literally and metaphorically, become synonymous with the ability to see: the ‘eyes of the machine’.

In our attempt to understand images, mankind had walked into replicating the very processes by which we understand them. Another branch of neural network research put these networks into a feedback loop: one network was challenged to generate an image given a feature or label, and its output was then fed to another network that tried to classify it. Thus, modeled as an adversarial game between a generator and a discriminator network, generative AI took root.
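To make the adversarial game concrete, here is a minimal illustrative sketch, assuming PyTorch is available. The tiny generator and discriminator and the toy two-dimensional ‘real’ data are hypothetical choices for illustration, not the architecture of any particular system.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps noise to a sample; discriminator scores samples as real vs. generated.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) + 2.0   # toy 'real' data: a Gaussian blob around (2, 2)
    noise = torch.randn(64, 4)
    fake = G(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call its output real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 4)).detach())  # generated points should drift toward (2, 2)
```

Training the discriminator on detached generator output, and then the generator against the current discriminator, is what makes the game adversarial rather than cooperative.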
It was only a matter of time before the focus shifted from two-dimensional images to the one-dimensional input of text and speech. Machine translation, for instance, is the task of transforming one such serial input into a serial output, the setting that eventually gave us the so-called pre-trained transformer. A significant result about six years ago pointed out that convolution was not needed for such transformations on text; a simpler mechanism, attention, provided the necessary capability to identify semantically relevant pieces of text regardless of their distance in the source. These advances are now at the root of the continuing evolution of neuromorphic computing architectures in ‘generative AI’.
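The ‘simpler mechanism’ referred to here is attention. Below is a minimal NumPy sketch of scaled dot-product attention, with hypothetical token embeddings and projection matrices chosen only for illustration; it shows how every position can weigh every other position directly, however far apart they are.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output position is a weighted mix of all value vectors V,
    with weights given by how well its query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over source positions
    return weights @ V, weights

# Hypothetical 5-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

out, attn = scaled_dot_product_attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)
print(attn.round(2))   # each row sums to 1: how much a token attends to every other
```

Because the weights come from pairwise query-key products, a token at the start of a sequence can attend to one at the end just as easily as to its neighbor.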
As we close the circle from images to text, we also make another important transition, from engineering, which was all about numbers, to text and symbolic processing. While numbers