Modern Web Development: Brought To You in Partnership With
Welcome Letter
Jacob Doiron, Software Engineer at Devada, DZone’s Parent Company
In a world where access to the internet has flourished, approaches to web development have shifted drastically. They are simpler from an overall setup perspective, even as the systems that power them have grown more complex.

Much of what goes into standing up a modern website has been organized into more robust packages that require far less systems knowledge and infrastructure setup from developers. Module bundlers like webpack and Rollup, while initially presenting a steep learning curve, both aim to reduce overall project complexity by automating tasks.

What was once a balancing act of bundling various asset files so they could be served efficiently is now rather trivial, tunable through just a few simple configuration options. Tools like these allow developers to quickly start new web applications without much fuss; everything just sort of magically works from the get-go.

Alongside these tools came a large push to streamline dependency management. Libraries like Bower, once the traditional choice, were gradually phased out after Node.js splashed onto the scene. They were superseded by more modern package managers like npm, which lets developers easily manage dependencies for their projects, discover new ones, and publish packages of their own for other developers to use and contribute to, all in one centralized location.

Along with the module bundlers and package managers, CI/CD tools greatly reduced the effort needed to deploy web applications. Where server knowledge was once required to set up front-end and back-end web servers, new approaches became available that lowered the barrier to entry.

Nowadays, many hosting companies allow a developer to simply fill out a basic form with a link to their GitHub repo and automatically spin up a server on demand that just works. Automations like this enable just about anyone to start up a website of their own with very little knowledge. They often provide webhooks directly into a git repository as well, so a new change pushed to a specific branch can be redeployed to your domain automatically, without your site ever going down in the process.

With all of the recent advancements made in pursuit of making software developers' lives easier, it is no surprise that companies are more focused on their websites now than ever. These are but some of the factors that led to modern web development trends. Continue reading to find out what else contributed to making modern web development modern.

Sincerely,
Jacob Doiron
Jacob is a Software Engineer at Devada, developing specifically for DZone. He was first introduced to
software development after becoming curious about third-party game clients and exploring the extent to
which gameplay could be automated. When he is not focused on addressing JIRA tickets, he can be
found playing WoW, Rocket League, reading a fantasy novel, or playing with his rescue dog, Shadow.
DZone Publications
Meet the DZone Publications team!
Publishing Refcards and Trend Reports year-round, this team can often be found editing contributor pieces, working with Sponsors, and coordinating with designers. Part of their everyday includes working across teams, specifically DZone's Client Success and Editorial teams, to deliver high-quality content to the DZone community.

DZone Mission Statement
At DZone, we foster a collaborative environment that empowers developers and tech professionals to share knowledge, build skills, and solve problems through content, code, and community. We thoughtfully — and with intention — challenge the status quo and value diverse perspectives so that, as one, we can inspire positive change through technology.
Lindsay is a Publications Manager at DZone. Reviewing contributor drafts, working with sponsors,
and interviewing key players for “Leaders in Tech,” Lindsay and team oversee the entire Trend
Report process end to end, delivering insightful content and findings to DZone’s developer audience.
In her free time, Lindsay enjoys reading, biking, and walking her dog, Scout.
As a Publications Manager, Melissa co-leads the publication lifecycle for Trend Reports — from
coordinating project logistics like schedules and workflow processes to conducting editorial
reviews with DZone contributors and authors. She often supports Sponsors during the pre- and
post-publication stages with her fellow Client Success teammates. Outside of work, Melissa passes
the days tending to houseplants, reading, woodworking, and adoring her newly adopted cats,
Bean and Whitney.
With twenty-five years of experience as a leader and visionary in building enterprise-level online
communities, Blake plays an integral role in DZone Publications, from sourcing authors to surveying
the DZone audience and promoting each publication to our extensive developer community, DZone
Core. When he’s not hosting virtual events or working with members of DZone Core, Blake enjoys
attending film festivals, covering new cinema, and walking his miniature schnauzers, Giallo and Neo.
John Esposito works as technical architect at 6st Technologies, teaches undergrads whenever they
will listen, and moonlights as research analyst at DZone.com. He wrote his first C in junior high and
is finally starting to understand JavaScript NaN%. When he isn’t annoyed at code written by his
past self, John hangs out with his wife and cats Gilgamesh and Behemoth, who look and act like
their names.
In February 2021, DZone surveyed software developers, architects, and other IT professionals in order to understand how web
applications are built today.
Methods:
We created a survey and distributed it to a global audience of software professionals. Question formats included multiple
choice, free response, and ranking. Survey links were distributed via email to an opt-in subscriber list, popups on DZone.com,
and short articles soliciting survey responses posted in a web portal focusing on topics related to web development. The survey
was opened on February 8th and closed on February 17th, recording 921 responses.
In this report, we review some of our key research findings. Many secondary findings of interest are not included here.
Additional findings will be published piecemeal on DZone.com.
1. The web platform (HTTP + URLs with client-side HTML/JS/CSS) has been capable of more than ad hoc DOM manipulation for
decades now. But (anecdotally) many web developers still spend countless hours doing little more than tossing divs
around with jQuery.
We wanted to know how richly developers take advantage of the web as a true application platform, rather than as
glorified, scriptable markup.
2. The multi-paradigmatic character of JavaScript (JS) imposes fewer design constraints than single-paradigm
languages do. This results in a huge variety of approaches to JS software design.
Moreover, in such a language-unconstrained environment, and especially since many junior developers are assigned
web programming tasks, high-level decisions about how to approach a solution — as commands, as entities, or as
transformations — are often made without a too-critical eye toward overall software architecture and system design.
We wanted to know how, and how much, developers think about building imperative, object-oriented, or functional
solutions in JS.
3. Client smartness seems to oscillate over time. The simplest distributed system is client-server with a network protocol in
between, and the simplest widespread example of this is a web application.
In this clearest and most ubiquitous case of distributed computing, we wanted to understand how developers are
splitting work between client and server, and how they are addressing problems that arise from this split.
Note: Recently, we published our research findings on the client-smartness pendulum with respect to protocol-
neutral multi-node systems from results of our survey on edge computing.
4. When fairly standard C compiler settings result in a slower binary than a Node application of the same complexity
— as has famously happened in suitably contrived benchmarks — at least casual multi-platform programmers (if
not legitimate C experts) must take JS performance more seriously. Moreover, as dependency trees inflate at npm-
accelerated speed, web application performance grows increasingly likely to falter unexpectedly.
We wanted to know where developers are currently encountering web application performance problems and how
they are solving them.
5. Finally, JavaScript has been adding higher-level programming concepts, along with some very nice syntactic sugar,
rapidly since at least ES2015.
Some enable new levels of dynamism and abstraction; others simply make code easier to write and read. We wanted
to know how developers think about and actually use these relatively recent features of JavaScript.
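A few of the post-ES2015 features alluded to above, collected into one illustrative sketch. This is our own example, not from the survey; the names and config shape are hypothetical. It shows destructuring with defaults, rest properties, nullish coalescing, optional chaining, and ES6 class syntax:

```javascript
class Widget {
  // Destructuring with a default value and rest properties in the parameter list.
  constructor({ name, retries = 3, ...rest } = {}) {
    this.name = name ?? 'anonymous'; // nullish coalescing: only null/undefined fall through
    this.retries = retries;
    this.extra = rest;               // everything else the caller passed
  }

  describe() {
    // Optional chaining avoids "cannot read property of undefined" errors.
    return `${this.name}: ${this.extra.theme?.color ?? 'default'}`;
  }
}

const w = new Widget({ name: 'menu', theme: { color: 'dark' } });
```

Each of these features is pure convenience over what ES5 could already express, which is precisely why the survey asked whether developers actually reach for them.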
We asked:
What percent of the JavaScript you have written over the course of your career follows each of these programming
paradigms?
And:
What percent of the JavaScript you have written within the past year follows each of these programming paradigms?
Results (n=801):
Table 1
Caveat lector: The “other” response count was higher than we expected. Our guess is that a significant number of respondents
identify their code as following orthogonal concepts like procedural, dynamic, or actor-oriented. In future surveys, we will
include more programming paradigm options.
Since some concepts excluded from answer options are somewhat more allied with answer options we did include (procedural
with “imperative,” actor-oriented with “object-oriented”), the significant number of “other” responses in this survey may skew
the picture painted by our analysis. Our intent was not merely to measure the three paradigm buckets included but rather to
understand how JS developers approach software design at a high level.
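To make the three paradigm buckets concrete, here is the same task — summing order totals — written each way. This is an illustrative sketch of ours (the data shape and names are not from the survey):

```javascript
// Imperative: step-by-step commands mutating an accumulator.
function sumImperative(orders) {
  let total = 0;
  for (let i = 0; i < orders.length; i++) {
    total += orders[i].amount;
  }
  return total;
}

// Object-oriented: state and behavior bundled into an entity.
class OrderBook {
  constructor(orders) { this.orders = orders; }
  total() { return this.orders.reduce((sum, o) => sum + o.amount, 0); }
}

// Functional: a pure transformation of data, with no mutation.
const sumFunctional = (orders) =>
  orders.map((o) => o.amount).reduce((a, b) => a + b, 0);
```

JavaScript accepts all three with equal grace, which is exactly the "language-unconstrained environment" described above.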
HYPOTHESIS ONE
Hypothesis: JavaScript code has become more object-oriented over time.
Reasoning: Earlier JavaScript programs were simpler and therefore required less structure. As JS programs grew more
complex, the additional structure offered by object-oriented design became more useful.
Further, JavaScript applications matured during the same time period that Java (and later C#) came to dominate server-side
development, so perhaps Java paradigms rubbed off on JS developers.
Observations:
1. The hypothesis was verified, although to a lesser degree than we expected. JavaScript grew more object-oriented by 2.6%
over the course of respondents’ careers (38.6%) vs. within the past year (41.2%).
2. However, the overall gap size was skewed by experience levels. Among senior developers (>5 years of professional IT
experience), object-orientedness grew by 4.5% (38.6% vs. 43.1%). Among junior developers, object-orientedness grew only
by 1.3% (36.4% vs. 37.7%).
Since the time gap between “over the course of my career” and “within the past year” is greater for senior developers,
we consider the 4.5% number more relevant to our original hypothesis. For this reason, we still consider the
hypothesis verified.
3. We conjecture that the smaller growth in object-orientedness over the course of junior developers’ careers results from
relative immaturity as JavaScript developers. Broadly speaking, it takes more programmer maturity to design object-
oriented systems than to write imperative commands.
This generalization is no doubt complicated by education, senior mentorship, and complexity of projects available to
be worked on. But it is interesting that junior JS developers not only have grown less in object orientation, but also
write less object-oriented code in absolute percentages than senior developers do — despite the prevalence of object-
oriented design in computer science and software engineering undergraduate coursework.
Note: Junior developers also write significantly more functional JS than senior developers. For more on this, see the
discussion of the next hypothesis.
HYPOTHESIS TWO
Hypothesis: JavaScript code has become more functional over time.
Reasoning: Similar to our reasoning for object-oriented hypothesis one above. Because functional programming has
been trending over the past five years or so (although if Alonzo Church is considered its inventor, it is far older than object-
orientation), we also expected junior developers to have written more functional code than senior developers.
Observations:
1. The hypothesis was weakly verified. JavaScript grew more functional by 2.3% over the course of respondents’ careers
(34.3%) vs. within the past year (36.6%).
2. Here the difference in functional JS growth between junior and senior developers was insignificant (1.2% vs. 2.8%).
3. However, junior developers reported writing significantly more functional JS (37.9% over the course of their careers and
39.1% within the past year) than senior developers (33% over the course of their careers and 35.8% within the past year).
This is consistent with our expectation based on recent functional trends in software development in general.
Note also that although on this expectation the “over the course of career” number is likely to be lowered by senior
developers’ years of pre-functional-trend experience, even the “within the past year” number is higher for junior
developers, though less so (4.9% difference vs. 3.3% difference).
HYPOTHESIS THREE
Hypothesis: Developers whose primary programming language is Java write more object-oriented JavaScript than
developers whose primary language is JS.
Reasoning: Java — both as a language and as a contingent ecosystem — pressures toward OO. JavaScript, as a language,
does not pressure toward OO.
Observations:
1. Table 2 shows the hypothesis was verified. Java developers reported writing 39.6% of JS in an object-oriented paradigm
over the course of their careers and 42.4% of JS in an OO paradigm within the past year. JS developers (combining client-
and server-side) reported writing 36.4% of JS in an OO paradigm over the course of their careers and 39.7% of JS in OO
paradigm within the past year.
Table 2
2. Table 3 shows the functional JavaScript gap was greater and in the opposite direction. JS developers report 37.5%
functional JS over careers and 39.1% functional JS within the past year, vs. Java developers at 32.4% functional JS over
career and 35.4% within the past year.
Table 3
Because Java has supported functional programming seriously only since version 8, we might expect the
Java developer's headspace to move "more functional" going forward (on the assumption that many problems solved
in Java are well solved by a functional approach). We will monitor this across other languages used by Java-
primary developers.
• Desktop and mobile machines have added orders of magnitude of CPU and memory
• Our own development work on web applications has grown increasingly client-side
Besides, server-side web frameworks have historically associated business logic with the server, which resulted in decades of
server-side wrapper classes that may now structure data up and down the stack. This means that — in parallel to early “dumb”
terminals — “client” and “presentation layer” have correlated in a way that is not logically necessary.
We asked:
In all web applications you have worked on over the course of your career, what percent of business logic is located in client
(browser) vs. server?
And:
In the web application you have worked on most recently, what percent of business logic is located in client (browser) vs.
server?
Results (n=796):
Table 4
Observations:
1. Far more business logic is located on the server than on the client. But almost a third is located in the client. The client is,
at least, not merely a vanishingly thin presentation layer.
2. In future surveys, we will dig deeper into what sort of exact business logic is being implemented where.
For example, presumably hard authorization logic is located on the server, but where security is not a concern (e.g., if
users are authorized to see things but may set a preference not to), the location of show/hide logic may vary. If bits on
the wire are being counted carefully, then show/hide logic should be located on the server; if it is not, then perhaps
bits on the wire are not being counted carefully.
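The show/hide trade-off can be sketched concretely. In this hypothetical example (field names are ours), server-side filtering keeps hidden items off the wire entirely, while client-side filtering sends everything and merely decides what to render:

```javascript
// Server-side: hidden items never reach the client, saving bits on the wire.
function itemsForWire(items, prefs) {
  return items.filter((item) => !prefs.hidden.includes(item.kind));
}

// Client-side: everything is sent; the client only decides what to display.
function markVisibility(items, prefs) {
  return items.map((item) => ({
    ...item,
    visible: !prefs.hidden.includes(item.kind),
  }));
}
```

For authorization the first approach is mandatory; for mere preferences, the choice reveals whether bits on the wire are being counted carefully.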
HYPOTHESIS FOUR
Hypothesis: More business logic is implemented client-side recently vs. over developers’ careers.
Observations:
1. The hypothesis was falsified. There is no significant difference in the client vs. server distribution of business logic now vs.
over the course of respondents’ careers. This lack of significant difference remained when we segmented respondents by
seniority.
2. However, junior developers reported implementing significantly more business logic client-side (40.6% over career, 39.3%
most recently) vs. senior developers (28.5% over career, 29.8% most recently).
3. Perhaps the discrepancy between senior and junior developers’ client-server business logic distribution, combined with
the lack of difference between the (orthogonal) measures “over career” vs. “most recently,” may be accounted for by
legacy code that senior developers are more likely to manage — or by senior developers retaining server-side mental
habits.
4. There is a possible sign of pressure toward server-side business logic independent of developer preference. See the
discussion of the question, “In most web applications, the client should be as ‘dumb’ as possible,” below.
COMPUTATIONAL WORK
Perhaps a billion consumer CPUs are going to waste; perhaps we need a new SETI@Home (and no, we do not mean crypto
mining…although for more on this, see below). Our AWS bill, at least, tells us we should probably think about pushing more
computational work to the client. We wanted to see if developers are doing so.
We asked:
In all web applications you have worked on over the course of your career, what percent of computational work is located in
client (browser) vs. server?
And:
In the web application you have worked on most recently, what percent of computational work is located in client (browser)
vs. server?
Results (n=789):
Table 5
Observations:
1. Servers remain dominant in computational work. This is to be expected: Despite WebAssembly, WebWorkers, and Moore’s
Law, web browsers are fundamentally paper replacements.
2. We expected that more computational work would be done on the client recently vs. over the course of developers’
careers. But as with the business logic client/server split, this turned out not to be the case.
3. Again, junior developers tend to locate significantly more work on the client (37.7% over career, 36.6% recently) vs. senior
developers (26.5% over career, 28.0% recently).
4. The client/server breakdowns for business logic and computational work match suspiciously closely. Perhaps this is
because:
• The questions were too hard to answer off the top of respondents’ heads, so they picked 70/30 as a likely
server/client split.
• The line between business logic and computational work is blurred, especially in complex (but numerically light)
enterprise apps.
In future surveys, we will break down computational work as well as business logic.
• Local browser storage mechanisms have grown more sophisticated since early HTML5.
• Server-side caching mechanisms optimized for high-volume, low-complexity requests have proliferated, as NoSQL and
in-memory database management systems mature beyond simple key-value stores.
• HTTP/1.1's persistent TCP connections have reduced fully handshaken connections (significantly, at least in theory).
We wanted to know how web developers are maintaining data locality, so we asked:
How often do you implement the following client-side and server-side storage and caching mechanisms in your web
applications?
Results (n=738):
Figure 1
[Figure 1 compares usage frequency across: cookies, sessionStorage, localStorage, IndexedDB, client-side SQLite or something similar, transport-level byte caching, server-side app-level cache (e.g., hash table managed by app code), and server-side external caches (e.g., memcached, Redis).]
Observations:
1. The most commonly used caching mechanisms (summing “sometimes” and “often” responses) were client-side: cookies,
sessionStorage, and localStorage.
Surprisingly, cookies are not far and away the most commonly used local storage mechanism: 39.7% of respondents
report using both cookies and sessionStorage “often,” and 38.5% use localStorage “often.” Since none of these
storage mechanisms offer much structure for free, our guess is that cookies benefit from legacy habits while
sessionStorage and localStorage benefit from a cleaner JavaScript API.
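The API-cleanliness point can be seen side by side. In this minimal sketch (helper names are ours), cookie access means parsing one string by hand, while Web Storage is a direct key-value API. The storage helpers accept any object with `setItem`/`getItem`, which is how `sessionStorage` and `localStorage` behave in browsers:

```javascript
// Cookie-style access: document.cookie is one string holding every pair,
// so reading a value means splitting and searching it by hand.
function readCookie(cookieString, name) {
  for (const pair of cookieString.split('; ')) {
    const [key, ...rest] = pair.split('=');
    if (key === name) return decodeURIComponent(rest.join('='));
  }
  return null;
}

// Web Storage-style access: a direct key-value store. In a browser you would
// pass localStorage or sessionStorage as the `storage` argument.
function savePreference(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}
function loadPreference(storage, key) {
  const raw = storage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}
```

The contrast in ergonomics alone goes some way toward explaining why sessionStorage and localStorage rival cookies in reported usage.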
2. IndexedDB usage is higher than we expected: 20% of respondents use it “often” and 37% use it “sometimes” — so a
significant majority.
We expected that “client-side SQLite or something similar” would be used more widely than IndexedDB. But while
“sometimes” numbers were close enough as to be effectively identical (37.1% for IndexedDB vs. 36.2% for SQLite vel
sim), “often” numbers diverged significantly: 20.2% for IndexedDB vs. 12.5% for SQLite vel sim.
Since SQLite requires less learning than IndexedDB for anyone who already knows SQL, the significantly greater
popularity of IndexedDB may indicate that developers are happy to spend time familiarizing themselves with a
less familiar standard rather than select the most familiar tool. Developers are lazy insofar as they avoid pointless
labor; they are not so lazy as to avoid learning.
3. This seems to be a matter of necessity, or good judgment, more than preference or bias: Respondents who separately
indicated that they prefer smart clients (i.e., those who “disagree” or “strongly disagree” with the statement, “In most
web applications, the client should be as ‘dumb’ as possible”) were only 2.4% more likely to use IndexedDB than those
who prefer dumb clients (37.8% vs. 35.4%).
4. JavaScript developers were significantly more likely to use more sophisticated client-side storage (IndexedDB)
“sometimes” than Java developers (40.3% vs. 35.7%).
But another data point complicates the picture: Java developers are half again as likely to use IndexedDB “often”
as JS developers (24.1% vs. 16.4%). Perhaps Java developers are more comfortable with, or dependent on, structured
data than JS developers; or perhaps a larger portion of JS developers than Java developers do not work with complex
data at all.
In future surveys, we will try to understand why developers make the storage choices they do.
5. Also unsurprising: Java developers were more likely to use server-side application-level caches “often” than JS developers
(42% vs. 29.3%). Java developers have hashtables on the brain.
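An app-level cache of the "hash table managed by app code" variety can be as small as a Map with a time-to-live, wrapping an expensive lookup. A hedged sketch (names and TTL policy are illustrative, not a production design):

```javascript
// Wrap an expensive synchronous lookup in a TTL-bounded in-memory cache.
// `loadFn` is the underlying lookup; `ttlMs` is how long entries stay fresh.
function cachedLookup(loadFn, ttlMs, now = Date.now) {
  const table = new Map(); // key -> { value, expires }
  return (key) => {
    const hit = table.get(key);
    if (hit && hit.expires > now()) return hit.value; // fresh cache hit
    const value = loadFn(key);                        // miss: recompute
    table.set(key, { value, expires: now() + ttlMs });
    return value;
  };
}
```

The same shape — hash table plus eviction policy — is what external caches like memcached and Redis provide once the table must be shared across processes.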
PERFORMANCE
Understanding the performance of any non-deterministic system, including web applications, is a vast and frustrating
endeavor, so we did not attempt this in our survey. Instead, we simply wanted to know how hard developers are pushing web
apps and which bottlenecks they most worry about.
We wanted to know the answer to this profound question, and also to less gilded but more ethical questions like, “how much
are developers doing physics in browsers?”
We asked:
How often have you written code to perform any of the following computationally intensive tasks in-browser?
Results (n=734):
Figure 2
[Figure 2 compares frequency across: machine learning, cryptocurrency mining, physical simulations, network/graph analysis, number crunching, WebGL, search tokenization/indexing, and aggregation for interactive reporting.]
Observations:
1. The most commonly implemented task was also the least necessarily computationally intensive: Aggregation for
interactive reporting was reported as “sometimes” (40.3%) or “often” (15.3%) implemented by over half of
respondents.
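Aggregation for interactive reporting is often just a group-by-and-sum over records already fetched to the browser, which may help explain why the least intensive task is also the most common. A minimal sketch (field names are illustrative):

```javascript
// Group rows by one field and sum another — the core of most interactive
// report widgets once the raw records are already client-side.
function groupTotals(rows, keyField, valueField) {
  const totals = new Map();
  for (const row of rows) {
    const key = row[keyField];
    totals.set(key, (totals.get(key) ?? 0) + row[valueField]);
  }
  return totals;
}
```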
2. Impressively, 15% of respondents reported cryptocurrency mining in-browser. Intuitively, this number seems quite high.
Either our intuitions are awful, or respondents are joking.
Assuming the former, the amount of cryptocurrency mining being written for browsers is more than trivial. How this
translates to actually used CPU cycles is unclear; for this more interesting question, empirical and objective research,
not survey-based, would be needed. But it is not immediately obvious how to crawl this, aside from looking for usage
of known cryptocurrency JS libraries (number-crunching code is less likely to use meaningful variable names).
Note: We welcome suggestions that readers might have for how to approach this problem — are there reliable ways
to infer that a bit of code is doing brute prime factorization? — at [email protected].
3. Apart from interactive reporting, the most commonly “often” implemented task was search tokenization/indexing (13%).
This is higher than we expected, but on reflection matches an experience we had not previously noticed — in practice,
we have needed to do some kind of tokenization and indexing in-browser much more frequently than the word
“search” might call to mind.
In future research, it would be interesting to see what kinds of tokenization and indexing are being done in-browser.
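In-browser tokenization and indexing of the kind respondents report can be as simple as a small inverted index over already-loaded documents. A hedged sketch of ours (not any particular library's API):

```javascript
// Build an inverted index: token -> set of document indices containing it.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, i) => {
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(token)) index.set(token, new Set());
      index.get(token).add(i);
    }
  });
  return index;
}

// Look up which documents contain a term.
function search(index, term) {
  return [...(index.get(term.toLowerCase()) ?? [])];
}
```

Filtering a dropdown or an autocomplete list against such an index is exactly the sort of task that does not call the word "search" to mind, yet is tokenization and indexing all the same.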
4. WebGL usage was likewise higher than we expected. Again, this is an impressive number, but graphics are also likely
to be implemented for fun, or as a challenge. So in future surveys, we will distinguish “for work” and “for fun” on
this question.
5. Far more machine learning is being done in browser than we expected: “sometimes” 23.2% and “often” 6.3%.
This bears further investigation, which we plan in our forthcoming data science and data engineering research.
Note: Since “machine learning” can be interpreted as capaciously as “a stack of linear regressions with sigmoids
interspersed” or “any hill-climbing algorithm,” it is possible that much of this “machine learning” work is not
maximally computationally intensive.
6. Caveat lector: Overall, the frequency of computationally intensive in-browser programming is much higher than we
expected.
We currently suspect that the phrasing of the question was slightly misleading versus our original interpretation. Our
guess is that if we were to replace “how often have you written” with “how many hours have you spent writing” or “how
many lines of code have you written,” reported numbers would be significantly lower.
HOW OFTEN DEVELOPERS THINK ABOUT SPECIFIC WEB APPLICATION PERFORMANCE CONSTRAINTS
So much for how often serious computations are done in browser. Now the flip side: Where are the performance bottlenecks?
Since our perspective is the developer’s, not the sysadmin’s, we wanted, not an objective empirical measure of bottlenecks, but
a sense of how much developers worry about which possible web performance bottlenecks.
We asked:
How often do you think about the following performance constraints when building client-side web applications?
Results (n=737):
[The figure compares how often respondents think about: network speed (end-to-end), network speed (last hop to the client device in particular), per-page request count, total page size (bits-over-the-wire), memory, CPU (absolute operation count), CPU (threads occupied), and local storage.]
Observations:
1. Disappointingly, in the era of 5G, end-to-end network speed remains the performance constraint most likely to be
thought about by developers “often” (44.9%).
Of course, 5G pertains not to end-to-end network speed but rather (typically) to last-hop speed, which many fewer
developers (30.3%) worry about “often.” This data point should serve as a tonic against marketing boasts by
reminding us of the persistently looming burden of the network.
2. However, memory came in a rather close second, with 42% of respondents thinking about memory “often.” We assume
that this number is not dominated by generic fear of bloated, plugin-heavy modern browsers since the question specified
“while building.”
This means that the code respondents are writing is, in the judgment of the developer, in danger of running up
against memory constraints nearly half the time. This seems like a strong signal indicating the complexity of modern
web applications.
We speculate that a significant share of the increase in memory use comes from paradigm shifts that aim at code
maintainability and rapid development — viz., the rise of:
• Event-based communication, which carries a higher memory burden than direct-invocation spaghetti.
But we do not presently have enough data to know how much higher-level architectural decisions are contributing
to memory worries vs. pure algorithmic work that requires larger heaps (e.g., dynamic programming in cases that are
poorly suited to memoization).
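Why event-based communication carries a memory cost can be seen in even a minimal pub/sub sketch (hypothetical names): every subscriber callback stays referenced by the bus until explicitly removed, whereas a direct invocation holds nothing after it returns:

```javascript
// Minimal event bus: the listeners Map keeps every subscribed callback (and
// whatever those closures capture) alive until unsubscribed.
class EventBus {
  constructor() {
    this.listeners = new Map(); // event name -> Set of callbacks
  }
  on(event, fn) {
    if (!this.listeners.has(event)) this.listeners.set(event, new Set());
    this.listeners.get(event).add(fn);
    return () => this.listeners.get(event).delete(fn); // unsubscribe handle
  }
  emit(event, payload) {
    for (const fn of this.listeners.get(event) ?? []) fn(payload);
  }
}
```

Forgetting to call the unsubscribe handle is a classic source of slow memory growth in long-lived single-page applications.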
3. Nearly half of respondents also think “sometimes” about per-page request count, CPU utilization, total page size, and
local storage.
The inverse of this is more interesting — a fifth of developers never think about local storage, total page size
(presumably because of heavy Ajax usage), and last-hop network speed (perhaps because in the age of 5G, this is no
longer easily the narrowest network bottleneck).
4. The most commonly thought about performance constraint overall, however (summing “sometimes” and “often”
responses), is “per-page request count,” by a negligible margin (87.4% vs. 87.2% for network speed).
This is easy to measure, often possible to control, and relatively easy to build cheating epicycles around (e.g., with UX
Note: Respondents wrote in many important web performance constraints that we intend to include in future surveys,
including cold start, cross-browser compatibility, DOM updates after the model changes, DOM traversals, huge number of
users, and request size.
Future Research
This write-up focuses on higher-level web application design and architecture. We also asked a number of lower-level
questions about usage of specific web standards and modern JavaScript constructs, including WebAssembly, Proxy objects,
object destructuring, server push, ES6 class syntax, TypeScript, nullish coalescence, async/await, web components, and more.
We also inquired into developers’ subjective evaluations of the state of modern web development, including:
• Whether current web applications are more complex than they need to be.
These results will be published in future articles on DZone.com. Readers who are interested in examining unpublished
responses before we complete our analysis should contact us at [email protected] with a brief description of intended
research. Note that we did not collect detailed data on framework usage; for this work, see the excellent State of JS survey
(unaffiliated).
Leaders in Modern
Web Development
By Lindsay Smith, Publications Manager at DZone
In these conversations, we discussed research findings and trends, and sought advice for the developer community.
⊲ Explore TypeScript. “In recent years TypeScript has brought OO — and type safety — in JavaScript to a whole new level.”
⊲ Isolate. “The easier it is for us to isolate and develop different bits of functionality, the better.”
⊲ Improve performance with native image technology. If you're willing to provide configuration upfront and handhold the
compiler, applications with native images can start up in tens of milliseconds, instead of seconds or minutes.
⊲ Make applications more isomorphic. This will allow applications to adapt and make the right adjustments on different
devices, etc.
⊲ Explore IoT and embedded software. “The more we can do to support IoT use cases to get it right, the possibilities are
endless.”
Based on your experience, what is the most pressing web development challenge(s) facing
developers today? And how do you recommend tackling that/those challenge(s)?
Josh: Today, we have to make applications more isomorphic so that they do the right thing on different devices, different
clients, different browsers, etc. We have a ways to go before web development is accessible — it’s still a little obscure.
In terms of state management, one of the really interesting problems that people face when building applications — whether
on the server-side or the client-side — is how you manage the application state. These are all new paradigms for the JavaScript
environment.
Server-side state management is everything. The way we architect our system has changed to accommodate the reality that
we’re no longer building monolithic applications. And the reason we’re no longer building monolithic applications is that
typically, larger applications have larger codebases, which require more integration, more stabilization, more synchronization
for the team. And human beings in that process are the bottleneck. So we moved to this architecture that allows us to have
different teams of people working and contributing to different parts of the codebase. That creates complications and moving
parts of the system that may fail; it creates a distributed system. So the thing that we can do is to understand that that
complexity exists, and then work around it.
Then you can embrace things like microservices. And that’s not easy, but it still beats the alternative, which is trying to be
agile with a larger and growing monolithic codebase. At some point, you can’t just have one app that does everything. Other
organizations break these things apart. And even within an application, you have different islands of functionality. The easier it
is for us to isolate and develop different bits of functionality, the better.
Our research found that developers whose primary programming language is Java are more likely
to write object-oriented (OO) JavaScript than developers whose primary language is JS.
Additionally, Java developers reported writing 39.6% of JS in an object-oriented paradigm over the
course of their careers and 42.4% of JS in an OO paradigm within the past year. Is this surprising?
What does this tell you about the modern web dev space?
James: This makes a lot of sense. As someone who started their career writing Java applications before moving to creating
web-based applications in JavaScript, there’s always been an urge to stick with OO concepts. For me that started with using
Backbone.js, but in recent years TypeScript has brought OO — and type safety — in JavaScript to a whole new level. I think
most developers with a Java background will find themselves more comfortable with TypeScript rather than “plain JavaScript.”
What’s your advice to developers managing server-side app performance and caching
mechanisms? What are the most important considerations?
At the beginning of the Java ecosystem, one thing that we did to make life easier was to try to make it so that we could run
as many different applications as possible on the same virtual machine using these things called "application servers." And
that meant that we ended up inviting all of these pains having to do with isolating the different deployments. And that
turned out to be a nightmare.
We don’t need the application server; let’s just start up our own apps. And that happens to fit better with the Docker-container
perspective as well: the idea that I'm going to deploy server-side applications as isolated Linux container images. It was a nice,
natural change.
Then the question over the last two or three years has been around this thing called GraalVM. GraalVM is a lot of things. But
one particular aspect that’s been very interesting to people is that there’s a particular utility within GraalVM called the native
image builder. The native image builder will take your Java bytecode and compile it into a native image. Basically, it looks at all
the things that you have in your code and eliminates the possibility that anything will be introduced after compilation, which,
by the way, takes away all the fun. Java is a very dynamic language; we can do a crazy number of things after the application
has been compiled. GraalVM wants to foreclose upon all of those opportunities.
GraalVM is what we in the highfalutin framework business call a “party pooper”. It doesn’t like all the dynamic behavior. But
if you can tell it about all this dynamic behavior upfront, it'll accommodate it. So your code still works; it's just not doing
anything completely dynamic without a priori knowledge.
Our research found that JavaScript code has become more object-oriented over time. What does
this mean for the future of development? Specifically, for Java developers?
James: I believe that the bar for the full-stack developer has been a tricky one. On one hand, coming from the front-end, you
have Node.js becoming the runtime of choice for many backends. On the other hand, with more object-oriented concepts in
JavaScript now, it’s easier for the Java developer to get to grips with JavaScript. It’s also true that more JavaScript developers
will be “happy” to use Java now. It really opens up career opportunities for both Java and JavaScript developers.
When looking at the future for web development, are there any important advances we’ve missed?
What do you think is the most important thing to keep in mind over the next 6-12 months?
Josh: IoT is a big thing, but it’s also just not. When we talk about Android and iOS, we’re not talking about the same APIs we
used to when writing embedded software. And that's the new kind of client that comes to my mind: There may be no screen; there
may not be a user interface. But that's a client, right? Maybe it's the military with a new server, new clients, etc.
And those who we might talk to for that particular platform can get themselves deployed in a lot of scenarios where, as a
server-side application developer or client-side application developer, we wouldn’t normally go. I think that the more we can do
to support IoT use cases and get it right, the possibilities are endless.
Lindsay is a Publications Manager at DZone. Reviewing contributor drafts, working with sponsors,
and interviewing key players for "Leaders in Tech," Lindsay and her team oversee the entire Trend Report
process end to end, delivering insightful content and findings to DZone’s developer audience. In her
free time, Lindsay enjoys reading, biking, and walking her dog, Scout.
Bringing JavaScript
Into the Modern Age
How ES6 Addresses the Shortfalls of Older JavaScript
At its inception over 25 years ago, JavaScript was an effective but cryptic language. Since then, the ECMAScript 6 (ES6) standard
has morphed JavaScript into a more succinct and descriptive language suited for today’s software development. In this article,
we will cover some of the major improvements ushered in by ES6 — classes, the arrow operator, iterators, and modules — as
well as a trip down memory lane to put into perspective how far JavaScript has come since its infancy.
Classes
One of the strongest characteristics a language can have is clarity: The ability of a language to describe exactly what it is trying
to accomplish can alleviate a large burden on a developer. Unfortunately, prior to ES6, classes in JavaScript were far from
intuitive. In ES5 and earlier, we were forced to use a combination of functions and prototypes:
function Car(make, model) {
  this.make = make;
  this.model = model;
}

Car.prototype.drive = function(destination) {
  console.log(`Driving ${this.make} ${this.model} to ${destination}`);
};
The Car function defines the constructor of our Car class. Within the constructor, we set the value of our Car fields using the
this keyword. The value of this changes based on the context in which it is used. If used at the root scope, it points to the
current window object (or the global context in Node.js):
console.log(this);
[object Window] {
...
}
In the context of our constructor, this takes on the value of the current object. As we will see shortly, the current object is equal
to the object we are instantiating using the new keyword. Using this, we are able to bind the values passed into the constructor
— namely make and model — to the make and model fields of our Car object, respectively.
We define methods on our Car class using prototypes. Prototypes are templates from which all objects inherit their properties
(fields) and methods. Although prototypes are more complex when used outside of simple classes, we can use the prototype
property to define the characteristics of our Car class. Essentially, we are appending a method named drive to our Car class,
which binds to the function on the Right-Hand Side (RHS) of the assignment.
When using the new keyword, the context of this within our constructor function changes to the instantiated object, which
allows us to bind values to the fields of the object. If we execute this snippet, we see:
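For instance, instantiating a Car and calling drive (the make, model, and destination values here are illustrative, not from the original):

```javascript
// ES5-style Car, as defined above
function Car(make, model) {
  this.make = make;
  this.model = model;
}

Car.prototype.drive = function(destination) {
  console.log(`Driving ${this.make} ${this.model} to ${destination}`);
};

// `new` points `this` at the freshly created object
const car = new Car('Nissan', 'Maxima');
car.drive('New York'); // → Driving Nissan Maxima to New York
```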
This process is made drastically easier with the ES6 class syntax. To create the same Car class in ES6, we can write the
following:
class Car {
  constructor(make, model) {
    this.make = make;
    this.model = model;
  }

  drive(destination) {
    console.log(`Driving ${this.make} ${this.model} to ${destination}`);
  }
}
On its face, the ES6 approach is much more intuitive: The class keyword defines our Car class, the constructor keyword
defines our constructor, and our drive method is defined within the class context without the need for the prototype or
function keywords.
Arrow Operator
As we saw in our class example, the function keyword enables us to create new functions and methods, but it can be clunky to
use. This is made even worse when trying to pass functional methods (lambda functions) to other functions. For example, we
can convert an array of names into an array of the first letter of each name using:
const names = [
'Justin',
'Ralph',
'Mike'
];
var firstLetters = names.map(function(name) {
return name[0];
});
console.log(firstLetters);
Since the map method requires a functional operator, in ES5 and earlier, we are forced to use the function keyword — along
with the return keyword — to define our method.
With ES6, we can use the arrow operator to define our method (sometimes called the fat arrow since it uses an equals sign for
the body of the arrow rather than a dash). With the arrow operator, we define our parameters on the left side of the arrow and
the return value on the right:
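With those rules in mind, the ES6 equivalent of the mapping above can be written in a single line:

```javascript
const names = ['Justin', 'Ralph', 'Mike'];

// One parameter, so the parentheses are optional; one expression, so the
// braces and return keyword are omitted
const firstLetters = names.map(name => name[0]);

console.log(firstLetters); // → [ 'J', 'R', 'M' ]
```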
Parameters are defined in a comma-separated list contained in parentheses, but if only one parameter is declared, we can
forgo the parentheses. Likewise, the body of the lambda function is contained in braces, but if there is only one statement and
that statement is the return value, we can forgo both the braces and the return keyword.
Iterators
Prior to ES6, iterating could appear cluttered or unclear. In most cases, we could iterate over an array using one of two
techniques: (1) for loops or (2) the forEach method. Using a for loop, we could create the following:
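A sketch of the for-loop technique, assuming an array whose values match the output shown below:

```javascript
const values = [5, 9, 2, 5];

// Index bookkeeping is required: we initialize, test, and increment i ourselves
for (let i = 0; i < values.length; i++) {
  console.log(values[i]);
}
```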
5
9
2
5
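The second pre-ES6 technique, the forEach method, produces the same output by passing a lambda function (the array is the same hypothetical one as above):

```javascript
const values = [5, 9, 2, 5];

// forEach hides the index, but still requires passing a function
values.forEach(function(value) {
  console.log(value);
});
```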
Using ES6 iterators, we can now use the for...of loop to iterate directly over the elements of an array:
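Using the same hypothetical array, the for...of form reads:

```javascript
const values = [5, 9, 2, 5];

// Iterates directly over the elements; no indices, no callback
for (const value of values) {
  console.log(value);
}
```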
This produces the same output as the previous two snippets. While subtle, this ES6 approach removes the need to track
indices, as well as pass a lambda function into a method (forEach), which can be complicated — even with the arrow operator
— as the iteration logic grows.
Modules
While many of the JavaScript examples we see in tutorials and examples are contained in a single file, this is rarely the case in
production environments. With large applications, we tend to disperse code into different modules, grouping pieces by their
functionality. This division means that we must export portions of our code outside of our modules and import them within
other modules.
Prior to ES6, we could export an entity — like a class or variable — car, using the CommonJS syntax:

module.exports = car;

We could then import it within another module using:

const car = require('/path/to/module/containing/car');

ES6 introduces a dedicated syntax for exports:

export { car };

and for the corresponding import:

import { car } from '/path/to/module/containing/car';
This type of import is called a named import since we explicitly supply the name of our export (and import) within braces. We
can also use a default export using the following:

export default car;

Since we denoted car as our default export, we do not need to explicitly name the imports from our module because car, by
default, will be imported:

import foo from '/path/to/module/containing/car';
Within the importing module, we can now access our car entity as foo. For example, if car were an object of class Car with a
drive method, we can call the drive method using:
foo.drive('New York');
Note that foo is what we are calling our car entity in the current module and this name does not need to match the name of
the default export (i.e., car).
Additionally, we can create as many named exports in a module as we like, but we can only have one default export per module
(since the importing module can only import a single entity without its name).
Conclusion
JavaScript has been one of the most popular web languages for over 25 years, but just as with any other language, it was in
serious need of improvement as time went on. Although there are still many exciting features to come, ES6 has ushered in a
new generation of JavaScript development and promises to alleviate a large portion of the developer burden seen in years past.
Justin Albano is a Software Engineer at Catalogic Software, Inc. responsible for building distributed
catalog, backup, and recovery solutions for Fortune 50 clients, focusing on Spring-based REST API and
MongoDB development. When not working or writing, he can be found practicing Brazilian Jiu-Jitsu,
playing or watching hockey, drawing, or reading.
Approaches like Jamstack and MACH emerged to help us build modern web experiences — sites that leverage best-of-breed
technology, deliver high performance, and improve the developer experience. However, data volume and complexity are
only increasing, as well as the mandate for developers to deliver engaging experiences more quickly than ever. Below are key
considerations for a solution that helps developers write less code, simplify connecting data, and build and deliver apps faster
than ever.
The “backend for frontend,” or BFF, architecture pattern grew from the need to abstract or decouple front-end (UI) from back-
end considerations. The separation of types (or interfaces) and resolvers (with queries mediating between them — executing
through resolvers and returning data conforming to a type) makes GraphQL an excellent choice to enable decoupling
or abstractions. GraphQL APIs (and the systems used to create them) must balance abstractions with implementations.
Designing an API using GraphQL and tools that leverage constructs like interfaces, routing, and declarative specifications helps
developers effectively drive toward the right balance.
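As a small illustration of that separation (the types and fields here are hypothetical), a GraphQL schema declares the interface while resolvers supply the implementation:

```graphql
type Customer {
  id: ID!
  name: String!
}

type Query {
  # The query conforms to the Customer type; the resolver behind it may
  # call any backend without the client knowing the implementation
  customer(id: ID!): Customer
}
```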
GraphQL as a Service
You’ve built it; now you must run it. You can use SaaS to host your APIs and avoid monitoring backends, tracking when
responses and APIs change, and managing API keys or performance degradations. You don’t have to write code to route around
errors, configure caching, handle structure and protocol translations, rate limits, SLAs, etc. And you avoid writing the code to do
all of that for every new backend.
The GraphQL API you build in StepZen is an API of APIs. Different backends and ways of connecting to them are made uniform
in the context of your GraphQL API. You can easily connect any backend through configuration or a few lines of code. You
maintain and manage no servers, and there’s no code to write to parallelize execution, store keys, handle caching, and other
complexities of keeping your endpoint highly available and performant.
The process of building a new application, extending an existing solution, or providing new features should be considered a
journey — a journey that does not end merely when the acceptance criteria are met. Solution architects, software engineers,
and feature developers have a responsibility to deliver features and functionalities that perform well under various loads and
include automated testing in order to protect the integrity of the application as future modifications are required. Failure to
meet these unwritten requirements will certainly lead to undesirable results.
This article will cover performance considerations for application services, content delivery networks (CDNs), and rendering
performance. The final section will focus on the importance of automated testing when delivering web applications.
Application Servers
Application servers complement the web application experience — offering not only the ability to retrieve and persist data,
but to offload complex processing functionality or serve as an integration orchestrator with adjacent services. Modern web
applications often employ one or more microservices utilizing a representational state transfer (REST) approach over the
HTTP application layer protocol. The client makes RESTful URI calls using the following methods:
Figure 1
Application services should be considered as important as the web client because the service tier will make or break
the overall application experience. Below are five best practices to consider when designing or developing a RESTful service:
@Data
public class Customer {
    private long id;
    private String name;
    private Date created;
    private Status status;
    …
    private List<Order> orders;
}
On the service side, a relationship exists between the customer, the current status of the customer, and a collection of orders.
Application servers often maintain this relationship in a Value Object (VO) design.
However, the web client making the request is simply interested in the Customer object. In this case, a data transfer object
(DTO) is a more concise and better solution:
@Data
public class CustomerDto {
    private long id;
    private String name;
    private Date created;
    private long statusId;
}
Instead of including the Status child object, the statusId is provided with the CustomerDto object, allowing a URI like
/status/{statusId} to be utilized either via an additional API call or via reference in a state-management solution within the
web client. Accessing orders for a given customer would be handled via an Order API.
“Making every request matter” requires software engineers to analyze the overall cost of returning an object that includes
child objects and collections. Considerations should be made to offer alternative entity objects that are geared to maximize
performance. Using the example above, this would mirror the design of the CustomerDto class.
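From the web client's perspective, the trade-off can be sketched in JavaScript (field names follow the classes above; the values are illustrative):

```javascript
// The full server-side shape: customer plus a child object and a collection
const customer = {
  id: 42,
  name: 'Acme Corp',
  created: '2021-06-01',
  status: { id: 7, label: 'ACTIVE' },
  orders: [ /* potentially hundreds of Order objects */ ]
};

// The DTO keeps only what the client needs; the Status child collapses to a
// statusId that can be resolved later via /status/{statusId}
const customerDto = {
  id: customer.id,
  name: customer.name,
  created: customer.created,
  statusId: customer.status.id
};
```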
Figure 2
Content delivery networks should be considered when the web application serves a wide base of users across a geographical
region. In addition to an improvement in static resource load times, the following benefits are also realized:
• Caching and application packaging at the CDN will yield additional performance improvements.
• The native design of CDNs offers better availability since traffic can be rerouted to another CDN in the event of a single
CDN outage.
• Content acceleration can take things to a new level by allowing content to be pre-fetched by the web client — before the
actual content is needed.
Rendering Performance
GOAL OF 60 FPS PERFORMANCE
To prevent an unfavorable experience, a 60 frames per second (FPS) refresh rate should be maintained by the web
application. With 1,000 milliseconds per second, 60 FPS translates to 16.67 milliseconds for each frame. When the browser
requires more time to complete, the term “jank” (or janky) is often used — which is defined as “of unreliable quality.” As a
result, it is common practice to focus on a 10-millisecond response rate to account for any overhead required by the browser to
complete the request.
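The arithmetic behind that budget is straightforward:

```javascript
const msPerSecond = 1000;
const targetFps = 60;

// Total time available per frame
const frameBudget = msPerSecond / targetFps; // ≈ 16.67 ms

// Leaving roughly 6-7 ms of headroom for browser overhead (painting,
// compositing) gives the commonly cited 10 ms target for application work
const appBudget = 10;

console.log(frameBudget.toFixed(2)); // → 16.67
```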
Figure 3
• JavaScript – represents the client logic being processed, resulting in activity being presented in the user interface.
• Style – handles the computations related to making CSS changes on the client.
• Layout – focuses on the work required by the web client to adjust the screen based upon the changes provided in the
request being processed.
• Paint – fills in the actual pixels, drawing text, colors, images, borders, and shadows for each element.
• Composite – draws the painted layers to the screen in the correct order.
Having an understanding of where underperforming web client requests require the most time in the pixel pipeline yields an
opportunity to streamline the design of the application itself.
Automated Testing
As noted in the introduction, the process of developing a solution should be considered a journey, one that does not end
merely when the acceptance criteria are met. Automated testing should always be considered part of this journey and exists in
all tiers of the application:
• Service tier – unit and integration tests are written and automated as part of the build process. The tests cover all custom
logic that exists in the service layer, plus validation of external systems and services via integration tests.
• Client tier – unit tests are written to cover all components, client services, and functionality that has been written to
meet the needs of the application. These tests are also automated as part of the build process.
With the service and client tiers validated as part of the build process, the most important aspect is the end-to-end tests.
2. Conditional Processing – a cross-reference of user functionality available for each role within the application.
Additionally, hierarchical tasks are noted (i.e., task must be submitted before approved) to mimic expected behavior.
3. Test Cases – the end result of scenarios that are performed to validate and note the performance of the application.
Using e2e solutions, response times for each test case can be tracked and analyzed to help determine gaps in expectations at
various points of the application — especially as more test cases are executed in tandem.
Conclusion
In the nearly thirty years I have been building, designing, and supporting applications, I have gained an appreciation for
the concept of “putting on a production support hat” when designing a new solution.
Focusing on delivering performance-tuned code and providing significant code coverage via unit tests, integration tests,
and end-to-end tests may not be the most fun area of development, but the time spent is always justified by a
performant application.
John Vester is an Information Technology professional with 25+ years of expertise in application
development, project management, enterprise integration, and team management. Currently
focusing on enterprise architecture/application design/continuous delivery utilizing object-oriented
programming languages and frameworks. Experience in building Java-based APIs against React and
Angular client frameworks, integrating architecture and design, CRM design and customization, with
additional experience using both C# (.NET Framework) and J2EE (Spring, plus various
other frameworks).
A New Approach to
Microservices
The Evolution of HTTP and JSON
If you want to predict the future, you need to understand the past. And like most things, trends stay on a trajectory, moving
from past to present to future. This allows you to extrapolate the future as a function of three to four points from our past,
assuming you can generalize and find common patterns.
The above is true regardless of what industry you are in. Hence, in order to create a secure future within the software
development industry, we must visit the past. Bear with me, I will try to keep this short and relevant. But first, let’s look at a
couple of trends within our industry over the last decade.
Later, of course, jQuery prevailed, and the entire industry scrapped my ideas for more low-level programming constructs,
arguably based upon creating as much JavaScript as possible. As a funny twist of irony, we’ve now come full circle, and are
back to a place where we no longer need to know JavaScript, CSS, and even HTML for that matter, due to the way Angular,
Material, React, and other similar high-level front-end frameworks are tied together. These emerging frameworks, more or less,
completely eliminate any need to know JavaScript, HTML, and CSS. Oh, the irony!
Twenty years ago, if you knew enough JavaScript to give initial focus to a textbox, you were considered a “full-stack developer.”
Today, you need to spend several years learning Angular, Material, and React. Unless you also know everything there is to know
about VueJS, you are hardly a senior developer — regardless of how much experience you have with the alternatives.
The workload has shifted, and the server no longer knows anything about HTML, JavaScript, or CSS. The server is now a “dumb
data server,” exclusively returning data, and hopefully processing this data in some intelligent manner. And of course, JSON is
the new XML — regardless of its lack of typing and obvious inferiority as a data-exchange format.
What the above graph illustrates is not as much which front-end framework seems to be the most popular, but rather that in
2010, we did not really have any front-end frameworks. We did everything by hand, and the server was rendering all the HTML.
AJAX and jQuery were only there to “spice up” our pre-rendered HTML, and the word SPA barely existed.
Suddenly, you needed two apps doing the exact same thing — one for Android and another for iOS. Apps, of course, could
not render HTML, CSS, or JavaScript, so we needed a client-agnostic way to handle data. Since JSON, at this point, was already
the new XML, the interface between the client and server became JSON, and the server was “dumbed down” to become
arguably the equivalent of the database server in the previous decades. The advantages in reusability were obvious: No one
could afford to build three different servers when they were already building three different frontends. Hence, the common
denominator became the server, built upon a standardized API, allowing any client to easily communicate with it.
• Be client-agnostic.
If you adopt these three pieces of advice, you simply cannot go wrong. Paradoxically, this brings us toward microservices. Yes, I
used the “Voldemort” word — I apologize. But I do not want to imply the traditional way of thinking about microservices,
tying everything into SOA, Windows Services, binary serialization protocols, and/or bidirectional socket connections.
Quite the contrary, in fact, I am talking about a new way of thinking about microservices — as standardized components within
your enterprise development efforts, allowing you to solve one problem at a time!
For instance: How many times have you implemented “SendEmail,” “Authenticate,” or “GetTranslation”? If you are anything
like me, you could probably find at least five different implementations of these previously mentioned functions in your
employer’s Git repository.
By creating common services such as those shared above in a more client-agnostic way, using JSON for serialization within a
“dumbed-down HTTP REST Web API,” you could simplify your employer’s code base, probably by an order of magnitude.
Yes, you heard me correctly! Now, this is not because a microservices architecture is a bad idea, but rather because it was sold
to us by the wrong guys, for the wrong reasons, with ulterior motives. There is nothing in a microservices architecture forcing
you to use NServiceBus, Solace, or Kafka in your microservices components. In fact, it is probably infinitely easier to implement
using HTTP, JSON, and Web APIs.
If you did implement microservices using HTTP, JSON, and/or Web APIs, you would then have one endpoint for retrieving
enterprise translations that you could later inject directly into your frontend. And you would have one endpoint for
sending emails, shared by every single app in your enterprise.
Additionally, Single Sign-On (SSO) becomes available for all apps, both frontend and backend. At this point, front-end
developers can simply pick and choose from a range of different interoperable components as they tie their frontends
together, further simplifying development efforts, building upon standardized building blocks and components, and providing
interoperability to every imaginable client dreamed of in the future.
Final Advice
Do not create one Git repository; instead, create 500 repositories and make sure everything is built on top of HTTP and JSON.
At least to whatever extent you can. Life is simply too short to implement 15 different authentication mechanisms, enterprise
language translation components, and SMTP wrappers.
Adopting HTTP and JSON as the foundation of a microservices architecture allows you to reuse the same components across multiple projects,
providing you with the promise of microservices without the hassle of their traditional architecture. There is nothing exclusive
about microservices that requires they be built on top of SOA, bidirectional sockets, or an enterprise service bus for that matter.
Rather, it is a way of thinking: Solve one problem once, and only once!
Thomas has spent most of his professional life researching and developing alternative programming
languages, automating the task of creating software. He currently works in the FinTech industry in
Cyprus, where, in addition to the previously mentioned technologies, he is fascinated by and working
with cryptography, cryptocurrencies, and alternative means of payment based upon cryptographically
securing invocations between servers and clients. He is Head of Development for a medium-sized ForEx
broker, combining this with his own original research.
JavaScript Comparison:
Jamstack vs. Static and
Dynamic Sites
By Anthony Gore, Founder at Vue.js Developers
One of the hottest topics of web development right now is Jamstack — a new architecture for building web apps that promises
to be faster, cheaper, and more secure. What exactly is Jamstack, and how does it differ from the better-known dynamic and
static site architectures? In this article, we’ll compare these to help you decide which is the best for your next web app project.
• Speed: Since static sites avoid the overhead of the web server and on-demand rendering, they can respond quicker. If the
static site is hosted on a CDN where the data is stored on the network edge, an additional speed advantage accrues.
• Price: Dynamic sites need a web server to be listening 24/7 for incoming requests but will typically spend most of that
time idling. Since web servers are usually charged by the minute, you’ll end up paying for capacity you don’t need. Static
file hosting is usually charged by the megabyte, though, so you’ll only pay for what you use. For a low-traffic website, this
means the difference between $50 per month for a web server and $5 per month for a CDN.
• Security: Since there is no server or database, the attack vectors for a static site are drastically reduced.
What Is Jamstack?
Jamstack is a new architecture that attempts to get the best of both worlds from dynamic and static sites, i.e., the speed, low
cost, and security from static sites matched with features from dynamic sites. The name Jamstack will give you a hint as to how
it does this: (J)avaScript, (A)PIs, and (M)arkup. At the heart, Jamstack sites are static sites. The difference is that they’re made
dynamic at runtime by using modern JavaScript and APIs.
For instance, let’s say a user lands on your website landing page. They then want to visit the “Contact” page. Instead of a full-
page refresh, the Contact page content is loaded from an API and dynamically inserted into the page. By avoiding a full page
reload, the new content is available quickly and seamlessly, leading to a superior user experience.
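A sketch of that swap (the API URL and element id are hypothetical):

```javascript
// Fetch the Contact page content from an API and insert it into the
// current page without a full reload
async function loadContactPage() {
  const response = await fetch('/api/pages/contact');
  const page = await response.json();

  // Only the main content region changes; the page shell stays put
  document.querySelector('#content').innerHTML = page.html;
  return page.html;
}
```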
Uses of Jamstack
Unlike static sites, which are limited to blogs and simple information sites, there are more possible use cases for Jamstack sites
as they are able to achieve dynamic content by utilizing runtime rendering. For example, before Jamstack, it would not have
been viable to build an e-commerce store statically. Now, by using a static-site generator (like Eleventy) and integrations with
Stripe, PayPal, and other API economy services, it would be entirely feasible.
Not only does Jamstack make it feasible but also advantageous. A Jamstack e-commerce store could potentially be faster for
users, cheaper to host, and would not have many of the security holes of a dynamic site.
Downsides of Jamstack
While Jamstack is able to attain the benefits of both static and dynamic sites, these benefits do come at a price:
• Complexity: It’s now very easy to create a full-featured web app using server-based frameworks like Rails, WordPress, or
Laravel, which have a convenient module system including almost any conceivable feature. To achieve something similar
with Jamstack may involve a novel solution using a variety of differently designed frameworks and services, each with its
own unique APIs and coding paradigms.
• Immaturity: Many of the technologies involved in Jamstack are still new and don’t have the same level of documentation
and battle-testing as dynamic frameworks.
Conclusion
Given the potential advantages, does this mean all sites should use the Jamstack architecture? Not necessarily. As mentioned,
Jamstack sites may be more complex to build, and most of the benefits only accrue where user experience is crucial. For
example, it wouldn’t make much sense to build an internal web portal with Jamstack. The milliseconds shaved off requests in
this kind of app probably wouldn’t drive any additional benefits for a business. In this case, you’d probably go with a dynamic
site built with Rails or Django. However, this may change in the future as Jamstack evolves.
Anthony Gore is a web developer from Sydney, Australia. He is a Vue Community Partner and the
founder of DevTrends.io.