DOKK Library

Open Source Yearbook 2017

Authors opensource.com

License CC-BY-SA-4.0





         Opensource.com publishes stories about creating, adopting, and sharing
         open source solutions. Visit Opensource.com to learn more about how the
         open source way is improving technologies, education, business, government,
         health, law, entertainment, humanitarian efforts, and more.

         Submit a story idea: https://opensource.com/story

         Email us: open@opensource.com

         Chat with us in Freenode IRC: #opensource.com




OPENSOURCE.COM: WRITE FOR US
   7 big reasons to contribute to Opensource.com:
           1    Career benefits: “I probably would not have gotten my most recent job if it had not been for my articles on
                Opensource.com.”



           2     Raise awareness: “The platform and publicity that is available through Opensource.com is extremely
                 valuable.”



           3     Grow your network: “I met a lot of interesting people after that, boosted my blog stats immediately, and
                 even got some business offers!”



           4     Contribute back to open source communities: “Writing for Opensource.com has allowed me to give
                 back to a community of users and developers from whom I have truly benefited for many years.”



           5     Receive free, professional editing services: “The team helps me, through feedback, on improving my
                 writing skills.”



           6     We’re loveable: “I love the Opensource.com team. I have known some of them for years and they are
                 good people.”



           7     Writing for us is easy: “I couldn't have been more pleased with my writing experience.”


   Email us to learn more or to share your feedback about writing for us: https://opensource.com/story
   Visit our Participate page to learn more about joining in the Opensource.com community: https://opensource.com/participate
   Find our editorial team, moderators, authors, and readers on Freenode IRC at #opensource.com: https://opensource.com/irc




FOLLOW US
           Twitter @opensourceway: https://twitter.com/opensourceway
           Google+: https://plus.google.com/+opensourceway
           Facebook: https://www.facebook.com/opensourceway
           Instagram: https://www.instagram.com/opensourceway
            IRC: #opensource.com on Freenode

   All lead images by Opensource.com or the author under CC BY-SA 4.0 unless otherwise noted.



FROM THE EDITOR

           Dear Open Source Yearbook reader,
           In 2015, Opensource.com published the first Open Source Yearbook [1], and thanks to contributions from
           more than 25 writers, the 2016 edition [2] was even bigger and included more than 100 organizations,
           projects, technologies, and events.
           In the 2017 edition, we offer a pleasing mix of new tech trends and nostalgia. We celebrate 60 years
           of Fortran and 30 years of Perl, and we learn how to run old DOS programs under Linux. We also
           dive into the world of machine learning and AI, the increasingly popular Go programming language
           and the rapidly growing adoption of Kubernetes, and the ongoing challenge of teaching operations to
           software developers.
           Thank you to everyone who contributed to the 2017 Open Source Yearbook [3], and to the communities
           who helped create, document, evangelize, and share open source technologies and methodologies
           throughout the year. And a special thanks to the following writers for their contributions:

             • David Both
             • Mike Bursell
             • Ben Cotton
             • Jeremy Garcia
             • Gordon Haff
             • Jim Hall
             • Scott Hirleman
             • Ruth Holloway
             • Elizabeth K. Joseph
             • Jen Kelchner
             • Seth Kenlon
             • Charity Majors
             • Matt Micene
             • Sreejith Omanakuttan
             • Jeff Rouse
             • Don Schenck
             • Amy Unruh
             • Dan Walsh

           Best regards,

           Rikki Endsley
           Opensource.com community manager

           [1] http://opensource.com/yearbook/2015
           [2] https://opensource.com/yearbook/2016
           [3] https://opensource.com/yearbook/2017




CONTENTS

WORKING
  10  Top 5 Linux pain points in 2017, by Jeremy Garcia
      Poor documentation heads the list of Linux user woes to date. Here are a few other common problem areas.
  11  How Linux containers have evolved, by Daniel Walsh
      Containers have come a long way in the past few years. We walk through the timeline.
  18  The changing face of the hybrid cloud, by Gordon Haff
      Terms and concepts around cloud computing are still new, but evolving.
  20  11 reasons to use the GNOME 3 desktop environment for Linux, by David Both
      The GNOME 3 desktop was designed with the goals of being simple, easy to access, and reliable. GNOME's popularity attests to the achievement of those goals.
  22  7 cool KDE tweaks that will change your life, by Seth Kenlon
      KDE's Plasma desktop offers a ton of options to customize your environment for the way you work. Here are seven to check out.
  25  Which technologies are poised to take over in open source?, by Scott Hirleman
      These technologies are quickly gaining ground on open source stalwarts, creating opportunities for people who become proficient in them.
  26  Ops: It's everyone's job now, by Charity Majors
      The last decade was all about teaching sysadmins to write code. The next challenge will be teaching operations to software developers.
  28  Why open source should be the first choice for cloud-native environments, by Elizabeth K. Joseph
      For the same reasons Linux beat out proprietary software, open source should be the first choice for cloud-native environments.
  31  What's the point of DevOps?, by Matt Micene
      True organizational culture change helps you bridge the gaps you thought were uncrossable.
  34  The politics of the Linux desktop, by Mike Bursell
      If you're working in open source, why would you use anything but Linux as your main desktop?
  36  10 open source technology trends for 2018, by Sreejith Omanakuttan
      What do you think will be the next open source tech trends? Here are 10 predictions.
  39  Kubernetes, standardization, and security dominated 2017 Linux container news, by Gordon Haff
      We round up our most popular Linux container reads from the past year.

COLLABORATING
  42  Best Trio of 2017: SpamAssassin, MIMEDefang, and Procmail, by David Both
      Our annual "Best Couple" award has expanded to a trio of applications that combine to manage server-side email sorting beautifully.
  47  Creative Commons: 1.2 billion strong and growing, by Ben Cotton
      Creative Commons shares its 2016 State of the Commons report, and here are a few highlights.
  48  24 Pull Requests challenge encourages fruitful contributions, by Ben Cotton
      16,720 pull requests were opened. Of those, 10,327 were merged and 1,240 were closed.
  49  Openness is key to working with Gen Z, by Jen Kelchner
      Members of Generation Z operate openly by default. Are you ready to work with them?

LEARNING
  51  5 big ways AI is rapidly invading our lives, by Rikki Endsley
      Let's look at five real ways we're already surrounded by artificial intelligence.
  54  Getting started with .NET for Linux, by Don Schenck
      Microsoft's decision to make .NET Core open source means it's time for Linux developers to get comfortable and start experimenting.
  57  Why Go is skyrocketing in popularity, by Jeff Rouse
      In only two years, Golang leaped from the 65th most popular programming language to #17. Here's what's behind its rapid growth.
  60  Introduction to the Domain Name System (DNS), by David Both
      Learn how the global DNS system makes it possible for us to assign memorable names to the worldwide network of machines we connect to every day.
  65  What is the TensorFlow machine intelligence platform?, by Amy Unruh
      Learn about the Google-developed open source library for machine learning and deep neural networks research.
  72  Is blockchain a security topic?, by Mike Bursell
      Yet again, we need to understand how systems and the business work together and be honest about the fit.

CREATING
  74  Top open source solutions for designers and artists from 2017, by Alan Smithee
      We collected popular 2017 Opensource.com articles about exciting developments in open source solutions for designers and artists.
  76  How to use Pulse to manage sound on Linux, by Seth Kenlon
      Learn how audio on Linux works and why you should consider Pulse to manage it.

OLD SCHOOL
  80  Happy 60th birthday, Fortran, by Ben Cotton
      Fortran may be trending down on Google, but its foundational role in scientific applications ensures that it won't be retiring anytime soon.
  82  Perl turns 30 and its community continues to thrive, by Ruth Holloway
      Created for utility and known for its dedicated users, Perl has proven staying power. Here's a brief history of the language and a look at some top user groups.
  86  The origin and evolution of FreeDOS, by Jim Hall
      Or, why a community formed around an open source version of DOS, and how it's still being used today.
  89  How to run DOS programs in Linux, by Jim Hall
      QEMU and FreeDOS make it easy to run old DOS programs under Linux.

   6  7 Reasons to Write for Us / Follow Us
  93  Call for Papers / Editorial Calendar
WORKING


Top 5 Linux pain points in 2017
BY JEREMY GARCIA

Poor documentation heads the list of Linux user woes to date. Here are a few other common problem areas.



AS I DISCUSSED in my 2016 Open Source Yearbook [1] article on troubleshooting tips for the 5 most common Linux issues [2], Linux installs and operates as expected for most users, but some inevitably run into problems. How have things changed over the past year in this regard? Once again, I posted the question to LinuxQuestions.org and on social media, and analyzed LQ posting patterns. Here are the updated results.

1. Documentation
Documentation, or lack thereof, was one of the largest pain points this year. Although open source methodology produces superior code, the importance of producing quality documentation has only recently come to the forefront. As more non-technical users adopt Linux and open source software, the quality and quantity of documentation will become paramount. If you've wanted to contribute to an open source project but don't feel you are technical enough to offer code, improving documentation is a great way to participate. Many projects even keep the documentation in their repository, so you can use your contribution to get acclimated to the version control workflow.

2. Software/library version incompatibility
I was surprised by this one, but software/library version incompatibility was mentioned frequently. The issue appears to be greatly exacerbated if you're not running one of the mainstream popular distributions. I haven't personally encountered this problem in many years, but the increasing adoption of solutions such as AppImage [3], Flatpak [4], and Snaps leads me to believe there may indeed be something to this one.

3. UEFI and secure boot
Although this issue continues to improve as more supported hardware is deployed, many users indicate that they still have issues with UEFI and/or secure boot. Using a distribution that fully supports UEFI/secure boot out of the box is the best solution here.

4. Deprecation of 32-bit
Many users are lamenting the death of 32-bit support in their favorite distributions and software projects. Although you still have many options if 32-bit support is a must, fewer and fewer projects are likely to continue supporting a platform with decreasing market share and mind share. Luckily, we're talking about open source, so you'll likely have at least a couple of options as long as someone cares about the platform.

5. Deteriorating support and testing for X-forwarding
Although many longtime and power users of Linux regularly use X-forwarding and consider it critical functionality, as Linux becomes more mainstream it appears to be seeing less testing and support, especially from newer apps. With Wayland network transparency still evolving, the situation may get worse before it improves.

Holdovers—and improvements—from last year
Video (specifically, accelerators/acceleration, the latest video cards, proprietary drivers, and efficient power management), Bluetooth support, specific WiFi chips and printers, and power management, along with suspend/resume, continue to be troublesome for many users. On a more positive note, installation, HiDPI, and audio issues were significantly less frequent than they were just a year ago.
Linux continues to make tremendous strides, and the constant, almost inexorable cycle of improvement should ensure that continues for years to come. As with any complex piece of software, however, there will always be issues.

Links
[1] https://opensource.com/yearbook/2016
[2] https://opensource.com/article/17/1/yearbook-linux-troubleshooting-tips
[3] https://appimage.org/
[4] http://flatpak.org/

Author
Jeremy Garcia is the founder of LinuxQuestions.org and an ardent but realistic open source advocate. Follow Jeremy on Twitter: @linuxquestions




How Linux containers have evolved
BY DANIEL WALSH

Containers have come a long way in the past few years. We walk through the timeline.



IN THE PAST few years, containers have become a hot topic among not just developers, but also enterprises. This growing interest has caused an increased need for security improvements and hardening, and preparing for scalability and interoperability. This has necessitated a lot of engineering, and here's the story of how much of that engineering has happened at an enterprise level at Red Hat.
When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as Moby.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including Device Mapper [1], btrfs [2], and the first version of OverlayFS [3]. For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.
The next major hurdle was on the tooling to launch the container. At that time, upstream docker was using LXC [4] tools for launching containers, and we did not want to support the LXC tools set in RHEL. Prior to working with upstream docker, I had been working with the libvirt [5] team on a tool called virt-sandbox [6], which used libvirt-lxc for launching containers.
At the time, some people at Red Hat thought swapping out the LXC tools and adding a bridge so the Docker daemon would communicate with libvirt using libvirt-lxc to launch containers was a good idea. There were serious concerns with this approach. Consider the following example of starting a container with the Docker client (docker-cli) and the layers of calls before the container process (pid1OfContainer) is started:

docker-cli → docker-daemon → libvirt-lxc → pid1OfContainer

I did not like the idea of having two daemons between your tool to launch containers and the final running container.
My team worked hard with the upstream docker developers on a native Go programming language [7] implementation of the container runtime, called libcontainer [8]. This library eventually got released as the initial implementation of the OCI Runtime Specification [9] along with runc.

docker-cli → docker-daemon → pid1OfContainer

Although most people mistakenly think that when they execute a container, the container process is a child of the docker-cli, they actually have executed a client/server operation and the container process is running as a child of a totally separate environment. This client/server operation can lead to instability and potential security concerns, and it blocks useful features. For example, systemd [10] has a feature called socket activation, where you can set up a daemon to run only when a process connects to a socket. This means your system uses less memory and only has services executing when they are needed. The way socket activation works is systemd listens at a TCP socket, and when a packet arrives for the socket, systemd activates the service that normally listens on the socket. Once the service is activated, systemd hands the socket to the newly started daemon. Moving this daemon into a Docker-based container causes issues. The unit file would start the container using the Docker CLI and there was no easy way for systemd to pass the connected socket to the Docker daemon through the Docker CLI.
Problems like this made us realize that we needed alternate ways to run containers.
The container orchestration problem
The upstream docker project made using containers easy, and it continues to be a great tool for learning about Linux containers. You can quickly experience launching a container by running a simple command like docker run -ti fedora sh and instantly you are in a container.
The real power of containers comes about when you start to run many containers simultaneously and hook them together into a more powerful application. The problem with setting up a multi-container application is the complexity quickly grows and wiring it up using simple Docker commands falls apart. How do you manage the placement or orchestration of container applications across a cluster of nodes with limited resources? How does one manage their lifecycle, and so on?
At the first DockerCon, at least seven different companies/open source projects showed how you could orchestrate containers. Red Hat's OpenShift [11] had a project called geard [12], loosely based on OpenShift v2 containers (called "gears"), which we were demonstrating. Red Hat decided that we needed to re-look at orchestration and maybe partner with others in the open source community.
Google was demonstrating Kubernetes [13] container orchestration based on all of the knowledge Google had developed in orchestrating their own internal architecture. OpenShift decided to drop our Gear project and start working with Google on Kubernetes. Kubernetes is now one of the largest community projects on GitHub.

Kubernetes
Kubernetes was developed to use Google's lmctfy [14] container runtime. Lmctfy was ported to work with Docker during the summer of 2014. Kubernetes runs a daemon on each node in the Kubernetes cluster called a kubelet [15]. This means the original Kubernetes with Docker 1.8 workflow looked something like:

kubelet → dockerdaemon → PID1

Back to the two-daemon system.
But it gets worse. With every release of Docker, Kubernetes broke. Docker 1.10 switched the backing store, causing a rebuilding of all images. Docker 1.11 started using runc to launch containers:

kubelet → dockerdaemon → runc → PID1

Docker 1.12 added a container daemon to launch containers. Its main purpose was to satisfy Docker Swarm (a Kubernetes competitor):

kubelet → dockerdaemon → containerd → runc → pid1

As was stated previously, every Docker release has broken Kubernetes functionality, which is why Kubernetes and OpenShift require us to ship older versions of Docker for their workloads.
Now we have a three-daemon system, where if anything goes wrong on any of the daemons, the entire house of cards falls apart.

Toward container standardization

CoreOS, rkt, and the alternate runtime
Due to the issues with the Docker runtime, several organizations were looking at alternative runtimes. One such organization was CoreOS. CoreOS had offered an alternative container runtime to upstream docker, called rkt (rocket). They also introduced a standard container specification called appc (App Container). Basically, they wanted to get everyone to use a standard specification for how you store applications in a container image bundle.
This threw up red flags. When I first started working on containers with upstream docker, my biggest fear was that we would end up with multiple specifications. I did not want an RPM vs. Debian-like war to affect the next 20 years of shipping Linux software. One good outcome from the appc introduction was that it convinced upstream docker to work with the open source community to create a standards body called the Open Container Initiative (OCI) [16].
The OCI has been working on two specifications:

OCI Runtime Specification [17]: The OCI Runtime Specification "aims to specify the configuration, execution environment, and lifecycle of a container." It defines what a container looks like on disk, the JSON file that describes the application(s) that will run within the container, and how to spawn and execute the container. Upstream docker contributed the libcontainer work and built runc as a default implementation of the OCI Runtime Specification.

OCI Image Format Specification [18]: The Image Format Specification is based mainly on the upstream docker image format and defines the actual container image bundle that sits at container registries. This specification allows application developers to standardize on a single format for their applications. Some of the ideas described in appc, although it still exists, have been added to the OCI Image Format Specification. Both of these OCI specifications are nearing 1.0 release. Upstream docker has agreed to support the OCI Image Specification once it is finalized. Rkt now supports running OCI images as well as traditional upstream docker images.
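As a quick illustration of what the Runtime Specification means in practice, here is a minimal sketch using runc directly; the busybox image and the mycontainer name are arbitrary choices, and the rootfs is borrowed from Docker only for convenience.

# Build a root filesystem for the container (any rootfs will do; busybox is just small).
mkdir -p mycontainer/rootfs
docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -

cd mycontainer
runc spec              # writes config.json, the OCI Runtime Specification file for this bundle
runc run mycontainer   # runc reads config.json and starts the container process directly, no daemon

The config.json that runc spec generates is the on-disk JSON the specification talks about: it names the rootfs, the process to execute, and the namespaces and mounts to set up.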
The Open Container Initiative, by providing a place for the industry to standardize around the container image and the runtime, has helped free up innovation in the areas of tooling and orchestration.

Abstracting the runtime interface
One of the innovations taking advantage of this standardization is in the area of Kubernetes orchestration. As a big supporter of the Kubernetes effort, CoreOS submitted a bunch of patches to Kubernetes to add support for communicating and running containers via rkt in addition to the upstream docker engine. Google and upstream Kubernetes saw that adding these patches and possibly adding new container runtime interfaces in the future was going to complicate the Kubernetes code too much. The upstream Kubernetes team decided to implement an API protocol specification called the Container Runtime Interface (CRI). Then they would rework Kubernetes to call into the CRI rather than to the Docker engine, so anyone who wants to build a container runtime interface could just implement the server side of the CRI and they could support Kubernetes. Upstream Kubernetes created a large test suite for CRI developers to test against to prove they could service Kubernetes. There is an ongoing effort to remove all of the Docker-engine calls from Kubernetes and put them behind a shim called the docker-shim.

Innovations in container tooling

Container registry innovations with skopeo
A few years ago, we were working with the Project Atomic team on the atomic CLI [19]. We wanted the ability to examine a container image when it sat on a container registry. At that time, the only way to look at the JSON data associated with a container image at a container registry was to pull the image to the local server, and then you could use docker inspect to read the JSON files. These images can be huge, up to multiple gigabytes. Because we wanted to allow users to examine the images and decide not to pull them, we wanted to add a new --remote interface to docker inspect. Upstream docker rejected the pull request, telling us that they did not want to complicate the Docker CLI, and that we could easily build our own tooling to do the same.
My team, led by Antonio Murdaca [20], ran with the idea and created skopeo [21]. Antonio did not stop at just pulling the JSON file associated with the image—he decided to implement the entire protocol for pulling and pushing container images from container registries to/from the local host.
Skopeo is now used heavily within the atomic CLI for things such as checking for new updates for containers and inside of atomic scan [22]. Atomic also uses skopeo for pulling and pushing images, instead of using the upstream docker daemon.

Containers/image
We had been talking to CoreOS about potentially using skopeo with rkt, and they said that they did not want to exec out to a helper application, but would consider using the library that skopeo used. We decided to split skopeo apart into a library and an executable, and created the image library [23].
The containers/image [24] library and skopeo are used in several other upstream projects and cloud infrastructure tools. Skopeo and containers/image have evolved to support multiple storage backends in addition to Docker, and it has the ability to move container images between container registries, and many cool features. A nice thing about skopeo [25] is it does not require any daemons to do its job. The breakout of the containers/image library has also allowed us to add enhancements such as container image signing [26].

Innovations in image handling and scanning
I mentioned the atomic CLI command earlier in this article. We built this tool to add features to containers that did not fit in with the Docker CLI, and things that we did not feel we could get into the upstream docker. We also wanted to allow flexibility to support additional container runtimes, tools, and storage as they developed. Skopeo is an example of this.
One feature we wanted to add to atomic was atomic mount. Basically, we wanted to take content that was stored in the Docker image store (upstream docker calls this a graph driver), and mount the image somewhere, so that tools could examine the image. Currently if you use upstream docker, the only way to look at an image is to start the container. If you have untrusted content, executing code inside of the container to look at the image could be dangerous. The second problem with examining an image by starting it is that the tools to examine the container are probably not in the container image.
Most container image scanners seem to have the following pattern: They connect to the Docker socket, do a docker save to create a tarball, then explode the tarball on disk, and finally examine the contents. This is a slow operation.
With atomic mount, we wanted to go into the Docker graph driver and mount the image. If the Docker daemon was using device mapper, we would mount the device. If it was using overlay, we would mount the overlay. This is an incredibly quick operation and satisfies our needs. You can now do:

# atomic mount fedora /mnt
# cd /mnt

And start examining the content. When you are done, do a:

# atomic umount /mnt

We use this feature inside of atomic scan, which allows you to have some of the fastest container scanners around.
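For readers who want to try the daemon-less image handling described above, here is a short sketch using skopeo; the image name is arbitrary, and the oci: transport assumes a reasonably current skopeo build.

# Read an image's metadata straight from the registry, without pulling multi-gigabyte layers:
skopeo inspect docker://docker.io/library/fedora:latest

# Copy the image from the registry into a local OCI layout directory, no Docker daemon involved:
skopeo copy docker://docker.io/library/fedora:latest oci:/tmp/fedora-oci:latest

Both commands talk to the registry directly, which is why atomic can check for updates and scan images without going through the Docker daemon.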
Issues with tool coordination
One big problem is that atomic mount is doing this under the covers. The Docker daemon does not know that another process is using the image. This could cause problems (for example, if you mounted the Fedora image above and then someone went and executed docker rmi fedora, the Docker daemon would fail weirdly when trying to remove the Fedora image, saying it was busy). The Docker daemon could get into a weird state.

Containers storage
To solve this issue, we started looking at pulling the graph driver code out of the upstream docker daemon into its own repository. The Docker daemon did all of its locking in memory for the graph driver. We wanted to move this locking into the file system so that we could have multiple distinct processes able to manipulate the container storage at the same time, without having to go through a single daemon process.
We created a project called containers/storage [27], which can do all of the COW features required for running, building, and storing containers, without requiring one process to control and monitor it (i.e., no daemon required). Now skopeo and other tools and projects can take advantage of the storage. Other open source projects have begun to use containers/storage, and at some point we would like to merge this project back into the upstream docker project.

Undock and let's innovate
If you think about what happens when Kubernetes runs a container on a node with the Docker daemon, first Kubernetes executes a command like:

kubelet run nginx --image=nginx

This command tells the kubelet to run the NGINX application on the node. The kubelet calls into the CRI and asks it to start the NGINX application. At this point, the container runtime that implemented the CRI must do the following steps:

1. Check local storage for a container named nginx. If not local, the container runtime will search for a standardized container image at a container registry.
2. If the image is not in local storage, download it from the container registry to the local system.
3. Explode the downloaded container image on top of container storage—usually a COW storage—and mount it up.
4. Execute the container using a standardized container runtime.

Let's look at the features described above:

1. OCI Image Format Specification defines the standard image format for images stored at container registries.
2. Containers/image is the library that implements all features needed to pull a container image from a container registry to a container host.
3. Containers/storage provides a library for exploding OCI Image Formats onto COW storage and allows you to work with the image.
4. OCI Runtime Specification and runc provide tools for executing the containers (the same tool that the Docker daemon uses for running containers).

This means we can use these tools to implement the ability to use containers without requiring a big container daemon. In a moderate- to large-scale DevOps-based CI/CD environment, efficiency, speed, and security are important. And as long as your tools conform to the OCI specifications, then a developer or an operator should be using the best tools for automation through the CI/CD pipeline and into production. Most of the container tooling is hidden beneath orchestration or higher-up container platform technology. We envision a time in which runtime or image bundle tool selection perhaps becomes an installation option of the container platform.

System (standalone) containers
On Project Atomic we introduced the atomic host, a new way of building an operating system in which the software can be "atomically" updated and most of the applications that run on it will be run as containers. Our goal with this platform is to prove that most software can be shipped in the future in OCI Image Format, and use standard protocols to get images from container registries and install them on your system. Providing software as container images allows you to update the host operating system at a different pace than the applications that run on it. The traditional RPM/yum/DNF way of distributing packages locks the applications to the life cycle of the host operating systems.
One problem we see with shipping most of the infrastructure as containers is that sometimes you must run an application before the container runtime daemon is executing. Let's look at our Kubernetes example running with the Docker daemon: Kubernetes requires a network to be set up so that it can put its pods/containers into isolated networks. The default daemon we use for this currently is flanneld [28], which must be running before the Docker daemon is started in order to hand the Docker daemon the network interfaces to run the Kubernetes pods. Also, flanneld uses etcd [29] for its data store. This daemon is required to be run before flanneld is started.
If we want to ship etcd and flanneld as container images, we have a chicken-and-egg situation. We need the container runtime daemon to start the containerized applications, but these applications need to be running before the container runtime daemon is started. I have seen several hacky setups to try to handle this situation, but none of them are clean. Also, the Docker daemon currently has no decent way to configure the priority order that containers start. I have seen suggestions on this, but they all look like the old SysVInit way of starting services (and we know the complexities that caused).
systemd
One reason for replacing SysVInit with systemd was to handle the priority and ordering of starting services, so why not take advantage of this technology? In Project Atomic, we decided that we wanted to run containers on the host without requiring a container runtime daemon, especially for early boot. We enhanced the atomic CLI to allow you to install container images. If you execute atomic install --system etcd, it uses skopeo to go out to a container registry and pull down the etcd OCI image. Then it explodes (or expands) the image onto an OSTree backing store. Because we are running etcd in production, we treat the image as read-only. Next the atomic command grabs the systemd unit file template from the container image and creates a unit file on disk to start the image. The unit file actually uses runc to start the container on the host (although runc is not necessary).
Similar things happen if you execute atomic install --system flanneld, except this time the flanneld unit file specifies that it needs the etcd unit running before it starts.
When the system boots up, systemd ensures that etcd is running before flanneld, and that the container runtime is not started until after flanneld is started. This allows you to move the Docker daemon and Kubernetes into system containers. This means you can boot up an atomic host or a traditional rpm-based operating system that runs the entire container orchestration stack as containers. This is powerful because we know customers want to continue to patch their container hosts independently of these components. Furthermore, it keeps the host's operating system footprint to a minimum.
There even has been discussion about putting traditional applications into containers that can run either as standalone/system containers or as an orchestrated container. Consider an Apache container that you could install with the atomic install --system httpd command. This container image would be started the same way you start an rpm-based httpd service (systemctl start httpd, except httpd will be started in a container). The storage could be local, meaning /var/www from the host gets mounted into the container, and the container listens on the local network at port 80. This shows that you could run traditional workloads on a host inside of a container without requiring a container runtime daemon.

Building container images
From my perspective, one of the saddest things about container innovation over the past four years has been the lack of innovation on mechanisms to build container images. A container image is nothing more than a tarball of tarballs and some JSON files. The base image of a container is a rootfs along with a JSON file describing the base image. Then as you add layers, the difference between the layers gets tar'd up along with changes to the JSON file. These layers and the base file get tar'd up together to form the container image.
Almost everyone is building with the docker build and the Dockerfile format. Upstream docker stopped accepting pull requests to modify or improve the Dockerfile format and builds a couple of years ago. The Dockerfile played an important part in the evolution of containers. Developers or administrators/operators could build containers in a simple and straightforward manner; however, in my opinion, the Dockerfile is really just a poor man's bash script and creates several problems that have never been solved. For example (a short sketch illustrating two of these follows the list):

• To build a container image, the Dockerfile requires a Docker daemon to be running.
  • No one has built standard tooling to create the OCI image outside of executing Docker commands.
  • Even tools such as ansible-containers and OpenShift S2I (Source2Image) use docker-engine under the covers.
• Each line in a Dockerfile creates a new image, which helps in the development process of creating the container because the tooling is smart enough to know that the lines in the Dockerfile have not changed, so the existing images can be used and the lines do not need to be reprocessed. This can lead to a huge number of layers.
  • Because of these, several people have requested mechanisms to squash the images, eliminating the layers. I think upstream docker finally has accepted something to satisfy the need.
• To pull content from secured sites to put into your container image, often you need some form of secrets. For example, you need access to the RHEL certificates and subscriptions in order to add RHEL content to an image.
  • These secrets can end up in layers stored in the image. And the developer needs to jump through hoops to remove the secrets.
  • To allow volumes to be mounted in during Docker build, we have added a -v volume switch to the projectatomic/docker package that we ship, but upstream docker has not accepted these patches.
• Build artifacts end up inside of the container image. So although Dockerfiles are great for getting started or building containers on a laptop while trying to understand the image you may want to build, they really are not an effective or efficient means to build images in a high-scaled enterprise environment. And behind an automated container platform, you shouldn't care if you are using a more efficient means to build OCI-compliant images.
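The following is a small sketch of two of those problems, the layer-per-line behavior and secrets lingering in earlier layers; the file names are made up for illustration, and it assumes a running Docker daemon (which is itself the first problem on the list).

# Each instruction below becomes its own layer in the resulting image.
cat > Dockerfile <<'EOF'
FROM fedora
COPY secret.pem /tmp/secret.pem
RUN dnf -y install httpd
RUN rm /tmp/secret.pem
EOF

touch secret.pem              # stand-in for a real credential
docker build -t layer-demo .
docker history layer-demo     # shows a layer for each instruction, plus the base image's layers;
                              # the COPY layer still carries secret.pem even though the final
                              # filesystem no longer shows the file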
  Undock with Buildah                                                     • B uildah run is supported, but instead of relying on a
  At DevConf.cz 2017, I asked Nalin Dahyabhai [30] on my                     container runtime daemon, buildah executes runc to
team to look at building what I called containers-coreutils, basically, to use the containers/storage and containers/image libraries and build a series of command-line tools that could mimic the syntax of the Dockerfile. Nalin decided to call it buildah [31], making fun of my Boston accent. With a few buildah primitives, you can build a container image:

• One of the main concepts of security is to keep the amount of content inside of an OS image as small as possible to eliminate unwanted tools. The idea is that a hacker might need tools to break through an application, and if tools such as gcc, make, and dnf are not present, the attacker can be stopped or confined.
• Because these images are being pulled and pushed over the internet, shrinking the size of the container is always a good idea.
• The way Docker build works, commands to install software or compile software have to be in the buildroot of the container.
• Executing the run command requires all of the executables to be inside of the container image. Just using dnf inside of the container image requires that the entire Python stack be present, even if you never use Python in the application.
• ctr=$(buildah from fedora):
   • Uses containers/image to pull the Fedora image from a container registry.
   • Returns a container ID (ctr).
• mnt=$(buildah mount $ctr):
   • Mounts up the newly created container image ($ctr).
   • Returns the path to the mount point.
   • You can now use this mount point to write content.
• dnf install httpd --installroot=$mnt:
   • You can use commands on the host to redirect content into the container, which means you can keep your secrets on the host, you don't have to put them inside of the container, and your build tools can be kept on the host.
   • You don't need dnf inside of the container or the Python stack unless your application is going to use it.
• cp foobar $mnt/dir:
   • You can use any command available in bash to populate the container.
• buildah commit $ctr:
   • You can create a layer whenever you decide. You control the layers rather than the tool.
• buildah config --env container=oci --entrypoint /usr/bin/httpd $ctr:
   • All of the commands available inside of Dockerfile can be specified.
• buildah run $ctr dnf -y install httpd:
   • Runs the command inside of a locked down container.
• buildah build-using-dockerfile -f Dockerfile .:
   We want to move tools like ansible-containers and OpenShift S2I to use buildah rather than requiring a container runtime daemon.
   Another big issue with building in the same container runtime that is used to run containers in production is that you end up with the lowest common denominator when it comes to security. Building containers tends to require a lot more privileges than running containers. For example, we allow the mknod capability by default. The mknod capability allows processes to create device nodes. Some package installs attempt to create device nodes, yet in production almost no applications do. Removing the mknod capability from your containers in production would make your systems more secure.
   Another example is that we default container images to read/write because the install process means writing packages to /usr. Yet in production, I argue that you really should run all of your containers in read-only mode. Only allow the containers to write to tmpfs or directories that have been volume mounted into the container. By splitting the running of containers from the building, we could change the defaults and make for a much more secure environment.
   • And yes, buildah can build a container image using a Dockerfile.
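Strung together, those primitives amount to a short shell script. The following is only a sketch, assuming a Fedora host with buildah and dnf installed; the index.html file and the my-httpd tag are illustrative and not part of the original example:

    #!/bin/bash
    # Run as root (or under 'buildah unshare') so the mount is permitted.
    # Pull the base image and get a working-container reference.
    ctr=$(buildah from fedora)

    # Mount the container's root filesystem on the host.
    mnt=$(buildah mount "$ctr")

    # Install content from the host; dnf never has to exist inside the image.
    dnf install -y --installroot="$mnt" httpd

    # Populate the image using ordinary host tools.
    cp index.html "$mnt/var/www/html/"

    # Set the metadata a Dockerfile would normally set.
    buildah config --env container=oci --entrypoint /usr/bin/httpd "$ctr"

    # Unmount and commit; you decide when the layer is created.
    buildah unmount "$ctr"
    buildah commit "$ctr" my-httpd

Because the build tools stay on the host, the committed image contains httpd and its content, and nothing else.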



CRI-O: a runtime abstraction for Kubernetes
Kubernetes added an API to plug in any runtime for the pods called the Container Runtime Interface (CRI). I am not a big fan of having lots of daemons running on my system, but we have added another. My team, led by Mrunal Patel [32], started working on the CRI-O [33] daemon in late 2016. This is a Container Runtime Interface daemon for running OCI-based applications. Theoretically, in the future we could compile the CRI-O code directly into the kubelet to eliminate the second daemon.
   Unlike other container runtimes, CRI-O's only purpose in life is satisfying Kubernetes' needs. Remember the steps described above for what Kubernetes needs to run a container.
   Kubernetes sends a message to the kubelet that it wants it to run the NGINX server:

1. The kubelet calls out to CRI-O to tell it to run NGINX.
2. CRI-O answers the CRI request.
3. CRI-O finds an OCI image at a container registry.
4. CRI-O uses containers/image to pull the image from the registry to the host.
5. CRI-O unpacks the image onto local storage using containers/storage.
6. CRI-O launches an OCI runtime, usually runc, and starts the container. As I stated previously, the Docker daemon launches its containers using runc in exactly the same way.
7. If desired, the kubelet could also launch the container using an alternate runtime, such as Clear Containers runv.

CRI-O is intended to be a stable platform for running Kubernetes, and we will not ship a new version of CRI-O unless it passes the entire Kubernetes test suite. All pull requests that go to https://github.com/Kubernetes-incubator/cri-o [33] run against the entire Kubernetes test suite. You cannot get a pull request into CRI-O without passing the tests. CRI-O is fully open, and we have had contributors from several different companies, including Intel, SUSE, IBM, Google, and Hyper.sh. As long as a majority of maintainers agree to a patch to CRI-O, it will get accepted, even if the patch is not something that Red Hat wants.

Conclusion
I hope this deep dive helps you understand how Linux containers have evolved. At one point, Linux containers were an every-vendor-for-themselves situation. Docker helped focus on a de facto standard for image creation and simplifying the tools used to work with containers. The Open Container Initiative now means that the industry is working around a core image format and runtime, which fosters innovation around making tooling more efficient for automation, more secure, highly scalable, and easier to use. Containers allow us to examine installing software in new and novel ways, whether they are traditional applications running on a host, or orchestrated microservices running in the cloud. In many ways, this is just the beginning.

Links
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/device_mapper
[2] https://btrfs.wiki.kernel.org/index.php/Main_Page
[3] https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
[4] https://linuxcontainers.org/
[5] https://libvirt.org/
[6] http://sandbox.libvirt.org/
[7] https://opensource.com/article/17/6/getting-started-go
[8] https://github.com/opencontainers/runc/tree/master/libcontainer
[9] https://github.com/opencontainers/runtime-spec
[10] https://opensource.com/business/15/10/lisa15-interview-alison-chaiken-mentor-graphics
[11] https://www.openshift.com/
[12] https://openshift.github.io/geard/
[13] https://opensource.com/resources/what-is-kubernetes
[14] https://github.com/google/lmctfy
[15] https://kubernetes.io/docs/admin/kubelet/
[16] https://www.opencontainers.org/
[17] https://github.com/opencontainers/runtime-spec/blob/master/spec.md
[18] https://github.com/opencontainers/image-spec/blob/master/spec.md
[19] https://github.com/projectatomic/atomic
[20] https://twitter.com/runc0m
[21] https://github.com/projectatomic/skopeo
[22] https://developers.redhat.com/blog/2016/05/02/introducing-atomic-scan-container-vulnerability-detection/
[23] https://github.com/containers/image
[24] https://github.com/containers/image
[25] http://rhelblog.redhat.com/2017/05/11/skopeo-copy-to-the-rescue/
[26] https://access.redhat.com/articles/2750891
[27] https://github.com/containers/storage
[28] https://github.com/coreos/flannel
[29] https://github.com/coreos/etcd
[30] https://twitter.com/nalind
[31] https://github.com/projectatomic/buildah
[32] https://twitter.com/mrunalp
[33] https://github.com/Kubernetes-incubator/cri-o

Author
Daniel Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001. Dan has led the RHEL Docker enablement team since August 2013, but has been working on container technology for several years. He has led the SELinux project, concentrating on the application space and policy development. Dan helped develop sVirt, Secure Virtualization. He also created the SELinux Sandbox, the Xguest user, and the Secure Kiosk. Previously, Dan worked on Netect/Bindview's Vulnerability Assessment Products and at Digital Equipment Corporation on the Athena Project and the AltaVista Firewall/Tunnel (VPN) products. Dan has a BA in Mathematics from the College of the Holy Cross and an MS in Computer Science from Worcester Polytechnic Institute.




W O R K I N G
                      ..............   ..
                             .............


  The changing face of
  the hybrid cloud
                                                              BY GORDON HAFF


  Terms and concepts around cloud computing are still new, but evolving.



DEPENDING UPON the event you use to start the clock, cloud computing is only a little more than 10 years old. Some terms and concepts around cloud computing that we take for granted today are newer still. The National Institute of Standards and Technology (NIST) document that defined now-familiar cloud terminology—such as Infrastructure-as-a-Service (IaaS)—was only published in 2011, although it widely circulated in draft form for a while before that.
   Among other definitions in that document was one for hybrid cloud. Looking at how that term has shifted during the intervening years is instructive. Cloud-based infrastructures have moved beyond a relatively simplistic taxonomy. Also, it highlights how priorities familiar to adopters of open source software—such as flexibility, portability, and choice—have made their way to the hybrid cloud.
   NIST's original hybrid cloud definition was primarily focused on cloud bursting, the idea that you might use on-premise infrastructure to handle a base computing load, but that you could "burst" out to a public cloud if your usage spiked. Closely related were efforts to provide API compatibility between private clouds and public cloud providers and even to create spot markets to purchase capacity wherever it was cheapest.
   Implicit in all this was the idea of the cloud as a sort of standardized compute utility with clear analogs to the electrical grid, a concept probably most popularized by author Nick Carr in his book The Big Switch [1]. It made for a good story but, even early on, the limitations of the analogy became evident [2]. Computing isn't a commodity in the manner of electricity. One need look no further than the proliferation of new features by all of the major public cloud providers—as well as in open source cloud software such as OpenStack—to see that many users aren't simply looking for generic computing cycles at the lowest price.
   The cloud bursting idea also largely ignored the reality that computing is usually associated with data, and you can't just move large quantities of data around instantaneously without incurring big bandwidth bills and having to worry about the length of time those transfers take. Dave McCrory coined the term data gravity to describe this limitation.
   Given this rather negative picture I've painted, why are we talking about hybrid clouds so much today?
   As I've discussed, hybrid clouds were initially thought of mostly in the context of cloud bursting. And cloud bursting perhaps most emphasized rapid, even real-time, shifts of workloads from one cloud to another; however, hybrid clouds also implied application and data portability. Indeed, as I wrote in a CNET post [3] back in 2011: "I think we do ourselves a disservice by obsessing too much with 'automagical' workload shifting—when what we really care about is the ability to just move from one place to another if a vendor isn't meeting our requirements or is trying to lock us in."



Since then, thinking about portability across clouds has evolved even further.
   Linux always has been a key component of cloud portability because it can run on everything from bare-metal to on-premise virtualized infrastructures, and from private clouds to public clouds. Linux provides a well-established, reliable platform with a stable API contract against which applications can be written.
   The widespread adoption of containers has further enhanced the ability of Linux to provide application portability across clouds. By providing an image that also contains an application's dependencies, a container provides portability and consistency as applications move from development, to testing, and finally to production.
   Linux containers can be applied in many different ways to problems where ultimate portability, configurability, and isolation are needed. This is true whether running on-premise, in a public cloud, or a hybrid of the two.
   Container tools use an image-based deployment model. This makes sharing an application or set of services with all of their dependencies across multiple environments easy.
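As a minimal sketch of that image-based model, assuming the Docker CLI, a Dockerfile in the current directory, and a purely hypothetical registry and image name, the same artifact is built once and then runs unchanged wherever a container runtime is available:

    # Build the image once, with the application and its dependencies baked in.
    docker build -t registry.example.com/myapp:1.0 .

    # Push it to a registry that every environment can reach.
    docker push registry.example.com/myapp:1.0

    # Run the very same image on a laptop, a test VM, or a public cloud host.
    docker run -d -p 8080:8080 registry.example.com/myapp:1.0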
   Specifications developed under the auspices of the Open Container Initiative (OCI) work together to define the contents of a container image and those dependencies, environments, arguments, and so forth necessary for the image to be run properly. As a result of these standardization efforts, the OCI has opened the door for many other tooling efforts that can now depend on stable runtime and image specs.
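One way to see what those specifications standardize is to ask a registry for an image's manifest and configuration without pulling or running anything; a small sketch using skopeo, with the image name chosen only as an example:

    # Print the image's standardized metadata (layers, architecture,
    # environment, labels) straight from the registry.
    skopeo inspect docker://docker.io/library/fedora:latest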
   At the same time, distributed storage can provide data portability across clouds using open source technologies such as Gluster and Ceph. Physical constraints will always impose limits on how quickly and easily data can be moved from one location to another; however, as organizations deploy and use different types of infrastructure, they increasingly desire open, software-defined storage platforms that scale across physical, virtual, and cloud resources.
   This is especially the case as data storage requirements grow rapidly, because of trends in predictive analytics, internet-of-things, and real-time monitoring. In one 2016 study [4], 98% of IT decision makers said a more agile storage solution could benefit their organization. In the same study, they listed inadequate storage infrastructure as one of the greatest frustrations that their organizations experience.
   And it's really this idea of providing appropriate portability and consistency across a heterogeneous set of computing capabilities and resources that embodies what hybrid cloud has become. Hybrid cloud is not so much about using a private cloud and a public cloud in concert for the same applications. It's about using a set of services of many types, some of which are probably built and operated by your IT department, and some of which are probably sourced externally.
   They'll probably be a mix of Software-as-a-Service applications, such as email and customer relationship management. Container platforms, orchestrated by open source software such as Kubernetes, are increasingly popular for developing new applications. Your organization likely is using one of the big public cloud providers for something. And you're almost certain to be operating some of your own infrastructure, whether it's a private cloud or more traditional on-premise infrastructure.
   This is the face of today's hybrid cloud, which really can be summed up as choice—choice to select the most appropriate types of infrastructure and services, and choice to move applications and data from one location to another when you want to.

Links
[1] http://www.nicholascarr.com/?page_id=21
[2] https://www.cnet.com/news/there-is-no-big-switch-for-cloud-computing/
[3] https://www.cnet.com/news/cloudbursting-or-just-portable-clouds/
[4] https://www.redhat.com/en/technologies/storage/vansonbourne

Author
Gordon Haff is Red Hat's cloud evangelist, a frequent and highly acclaimed speaker at customer and industry events, and helps develop strategy across Red Hat's full portfolio of cloud solutions. He is the author of Computing Next: How the Cloud Opens the Future in addition to numerous other publications. Prior to Red Hat, Gordon wrote hundreds of research notes, was frequently quoted in publications like The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies. Earlier in his career, he was responsible for bringing a wide range of computer systems, from minicomputers to large UNIX servers, to market while at Data General. Gordon has engineering degrees from MIT and Dartmouth and an MBA from Cornell's Johnson School.








11 reasons to use the GNOME 3 desktop environment for Linux
BY DAVID BOTH
  The GNOME 3 desktop was designed with the goals of being simple, easy to access, and reliable.
  GNOME’s popularity attests to the achievement of those goals.


IN LATE 2016, an upgrade to Fedora 25 caused issues with the new version of KDE [1] Plasma that made it difficult for me to get any work done. So I decided to try other Linux desktop environments for two reasons. First, I needed to get my work done. Second, having been using KDE exclusively for many years, I thought it might be time to try some different desktops.
   The first alternate desktop I tried for several weeks was Cinnamon [2], which I wrote about in January 2017, and then I wrote about LXDE [3], which I used for about eight weeks and found many things about it that I like. I have used GNOME 3 [4] for a few weeks to research this article.
   Like almost everything else in the cyberworld, GNOME is an acronym; it stands for GNU Network Object Model Environment. The GNOME 3 desktop was designed with the goals of being simple, easy to access, and reliable. GNOME's popularity attests to the achievement of those goals.
   GNOME 3 is useful in environments where lots of screen real estate is needed. That means both large screens with high resolution, and minimizing the amount of space needed by the desktop widgets, panels, and icons to allow access to tasks like launching new programs. The GNOME project has a set of Human Interface Guidelines (HIG) that are used to define the GNOME philosophy for how humans should interface with the computer.

My eleven reasons for using GNOME 3
1. Choice: GNOME is available in many forms on some distributions like my personal favorite, Fedora. The login options for your desktop of choice are GNOME Classic, GNOME on Xorg, GNOME, and GNOME (Wayland). On the surface, these all look the same once they are launched, but they use different X servers or are built with different toolkits. Wayland provides more functionality for the little niceties of the desktop such as kinetic scrolling, drag-and-drop, and paste with middle click.
2. Getting started tutorial: The getting started tutorial is displayed the first time a user logs into the desktop. It shows how to perform common tasks and provides a link to more extensive help. The tutorial is also easily accessible after it is dismissed on first boot so it can be accessed at any time. It is very simple and straightforward and provides users new to GNOME an easy and obvious starting point. To return to the tutorial later, click on Activities, then click on the square of nine dots which displays the applications. Then find and click on the life preserver icon labeled Help.
3. Clean desktop: With a minimalist approach to a desktop environment in order to reduce clutter, GNOME is designed to present only the minimum necessary to have a functional environment. You should see only the top bar (yes, that is what it is called) and all else is hidden until needed. The intention is to allow the user to focus on the task at hand and to minimize the distractions caused by other stuff on the desktop.
4. The top bar: The top bar is always the place to start, no matter what you want to do. You can launch applications, log out, power off, start or stop the network, and more. This makes life simple when you want to do anything. Aside from the current application, the top bar is usually the only other object on the desktop.
5. The dash: The dash contains three icons by default, as shown below. As you start using applications, they are added to the dash so that your most frequently used applications are displayed there.
You can also add application icons to the dash yourself from the application viewer.
6. Application viewer: I really like the application viewer that is accessible from the vertical bar on the left side of the GNOME desktop, above. The GNOME desktop normally has nothing on it unless there is a running program, so you must click on the Activities selection on the top bar, then click on the square consisting of nine dots at the bottom of the dash, which is the icon for the viewer.
   The viewer itself is a matrix consisting of the icons of the installed applications as shown above. There is a pair of mutually exclusive buttons below the matrix, Frequent and All. By default, the application viewer shows all installed applications. Click on the Frequent button and it shows only the applications used most frequently. Scroll up and down to locate the application you want to launch. The applications are displayed in alphabetical order by name.
   The GNOME [4] website and the built-in help have more detail on the viewer.
7. Application ready notifications: GNOME has a neat notifier that appears at the top of the screen when the window for a newly launched app is open and ready. Simply click on the notification to switch to that window. This saved me some time compared to searching for the newly opened application window on some other desktops.
8. Application display: In order to access a different running application that is not visible, you click on the activity menu. This displays all of the running applications in a matrix on the desktop. Click on the desired application to bring it to the foreground. Although the current application is displayed in the top bar, other running applications are not.
9. Minimal window decorations: Open windows on the desktop are also quite simple. The only button apparent on the title bar is the "X" button to close a window. All other functions such as minimize, maximize, move to another desktop, and so on, are accessible with a right-click on the title bar.
10. New desktops are automatically created: New empty desktops are created automatically when the next empty one down is used. This means that there will always be one empty desktop available when needed. All of the other desktops I have used allow you to set the number of desktops while the desktop is active, too, but it must be done manually using the system settings.
11. Compatibility: As with all of the other desktops I have used, applications created for other desktops will work correctly on GNOME. This is one of the features that has made it possible for me to test all of these desktops so that I can write about them.

Final thoughts
GNOME is a desktop unlike any other I have used. Its prime directive is "simplicity." Everything else takes a back seat to simplicity and ease of use. It takes very little time to learn how to use GNOME if you start with the getting started tutorial. That does not mean that GNOME is deficient in any way. It is a powerful and flexible desktop that stays out of the way at all times.

Links
[1] https://opensource.com/life/15/4/9-reasons-to-use-kde
[2] https://opensource.com/article/17/1/cinnamon-desktop-environment
[3] https://opensource.com/article/17/3/8-reasons-use-lxde
[4] https://www.gnome.org/gnome-3/

Author
David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for OS/2 Magazine, Linux Magazine, Linux Journal, and OpenSource.com. His article "Complete Kickstart," co-authored with a colleague at Cisco, was ranked 9th in the Linux Magazine Top Ten Best System Administration Articles list for 2008.






7 cool KDE tweaks that will change your life
BY SETH KENLON


  KDE’s Plasma desktop offers a ton of options to customize your environment for the way you work.
  Here are seven to check out.




THE GREAT THING about KDE's Plasma desktop [1] is that it's universally familiar enough for anybody to use, but
  it’s also got all the knobs and switches needed to become
  a power user. There’s no way to cover all the great options
  available in the customizable desktop environment here,
  but these seven tweaks can change your Plasma experience
  for the better.
     These are based on KDE 5. Most of them also apply to
  KDE 4, although in some cases extra packages are needed,
  or the configuration options are in slightly different locations.

1. Get a full-screen app launcher
The thing about starting with all the options in the world is that you can imitate anything, including GNOME. When GNOME3 came out, it introduced the crazy idea of having a full-screen application launcher, combining a complete application list with a favorites section in the form of a dock and providing access to a dynamic list of virtual desktops. This idea was "borrowed" by Mac OS X as Launchpad, and now it can be mimicked with KDE's Plasma Desktop.
   For me, the full-screen launcher's appeal isn't that it can imitate GNOME3; it's about getting an alphabetized listing of all the applications on my system so I can find them without having to guess which category they were tagged into.
   To create a full-screen launcher on Plasma, add the Application Dashboard widget to your kicker or desktop. Once added, you'll have a button to access it. On KDE 4.x, install the Homerun package, which provides a file in /usr/bin that you can use to launch a similar interface.
   The app launcher's interface is robust. Type to search for a specific application or use your mouse or arrow keys to navigate and browse. On the left are your favorite applications and on the right are categories, including one that lists everything alphabetically.

2. Manage your fonts
For the artistically inclined (or just those addicted to fonts), KDE provides a very good font manager. Launch it as Font Management.
   For the everyday desktop user, a font manager provides a centralized interface for font previews, installation, and removal. For artists, the KDE 5 font manager enables the creation of font groups and the ability to enable and disable them quickly and easily. This means that if I'm working on graphics for a tabletop RPG set in the Old West, I can quickly deactivate all the futuristic fonts and activate the old classic
and Western-themed fonts to make my Inkscape and Scribus interfaces easier to deal with as I work. It's a great tool, and one that's relatively hidden.

3. Start and stop autostart
I get asked a lot about how to make things start (or stop them from starting) at login. For big, important services like CUPS or Apache, the answer is easy to find, but for smaller, user-centric services the answer can vary from desktop to desktop. In Plasma, it's pretty intuitive, but also flexible.
   In System Settings, the Startup and Shutdown control panel features the Autostart category. Here, you can view services that autostart when you log in. The interface allows for several categories of services, such as .desktop files (nearly any GUI application installed on your system has one) or even custom scripts.
   This is as useful for starting services as it is for stopping something from autostarting. For a while, I was using a file-sharing client daily, so I let it autostart as a convenience. Although I used it less after the project was over, I had no reason to uninstall it entirely, so I just stopped it from autostarting.
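On most setups, the application entries that the Autostart panel manages are plain .desktop files under ~/.config/autostart, a freedesktop.org convention, so you can also add or remove them from a shell. A minimal sketch; the application here is only an example:

    # GUI applications autostart from .desktop entries in ~/.config/autostart.
    mkdir -p ~/.config/autostart
    printf '%s\n' '[Desktop Entry]' 'Type=Application' 'Name=KAlarm' 'Exec=kalarm' \
        > ~/.config/autostart/kalarm.desktop

    # To stop it from autostarting, remove the entry (or disable it in the GUI).
    rm ~/.config/autostart/kalarm.desktop

Plasma also runs plain executable scripts placed in ~/.config/autostart-scripts at login, which covers the custom-script case.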

4. Set window rules
Have you ever been embroiled in a repetitive task only to realize that at least half of the steps involve constantly repositioning and adjusting the windows that pop up? I notice it any time I'm writing an article or documentation that requires several screenshots, or when I'm composing in Qtractor and find myself losing the mixer and synth windows.
   While the quick fix is to set the Keep above others option in the window's right-click menu, that only lasts as long as that instance of the window is open. KDE's Window Management control panel lets you hard-code rules for windows that match a variety of conditions.
   To create a rule, open System Settings and click the Window Management icon. Select the Window Rules category on the left. Create a new rule.
   You can base a rule off the string in a window's title bar, its class, host name, or other properties. The easiest way to focus in on a window is to use the Detect Window Properties button. Once set, you can prescribe where the window appears, the size at which it spawns, how it behaves, and much more. I have several rules for application windows that I have specific arrangements for, and it has invariably transformed, for the better, the way I work.

5. Remap your keys
The desktop isn't the only thing you can customize in Plasma. Your whole keyboard is open for customization, and it's amazing what you can do.
   Keyboard settings are found in the Input Devices panel of System Settings. In the Advanced tab of the Keyboard category, you can make all kinds of adjustments, including the two I prefer.
   The Caps Lock key, while useful on a typewriter, is (as far as I can tell) entirely vestigial in modern typing. In the rare instances that I need to write in capital letters, I either use the Alt-Shift-u macro or a stylesheet rule in Emacs, or I just hold the Shift key. Most Chromebooks, not insignificantly, have dropped the Caps Lock key in favor of a Search key.
   If you similarly have no use for Caps Lock, KDE lets you adjust the function of that key.



   In addition to the Caps Lock, I usually find at least one other key on any given keyboard that I never use. Sometimes it's the Menu key, other times it's an extra Alt or Control on the right side of the keyboard, or an extra Forward Delete, or an extra Enter. In KDE, you can set a spare key to what is called the Compose key. The Compose key is a prefix key; you press it, and then you press some other sequence of keys to compose a new character. For instance, pressing Compose followed by an e, followed by an ' produces the é character. Pressing Compose followed by a 1 and then 2 produces a ½ character.
   There are lots of "hidden" characters. They're not terribly easy to memorize, but you start to remember the ones you use a lot. Get a list of all possible combinations in /usr/share/X11/locale/en_US.UTF-8/Compose.
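If you like the command line, you can try the same thing outside of System Settings; a quick sketch for an X session, with the Menu key chosen only as an example:

    # Make the rarely used Menu key act as the Compose key for this session.
    setxkbmap -option compose:menu

    # Skim every Compose sequence X ships; each line maps keys to a character.
    grep '<Multi_key>' /usr/share/X11/locale/en_US.UTF-8/Compose | less

    # For instance, find the sequence that produces the ½ character.
    grep onehalf /usr/share/X11/locale/en_US.UTF-8/Compose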

6. Create a Qt look-alike
It's probably already configured by your distribution, but a common problem people run into (especially those who are experimenting with their system) is why their GNOME applications don't look the same as their KDE applications. Most distributions take care of this in advance, but things can fall out of sync if you accidentally remove a package or a config file that controls the theme settings.
   On the Plasma desktop, KDE can theme GTK applications so that everything looks like it's using the same toolkit and the same theming engine. In the System Settings, in the Application Style panel, you can set the theme that your GTK apps use, the font, icon set, a fallback theme, and cursor style. It's not very exciting, but it's a tremendous relief to someone who accidentally removed a theming configuration and has been stuck staring at raw GTK widgets for a week.

7. Connect your computer and your phone
I'm not a heavy mobile phone user, but I work from home, so I'm required to have one. I activated something called KDE Connect, located in System Settings, because it sounded appropriate: I have a mobile phone, so I'll install the thing that says it's for mobile devices. Zero expectations.
   As it turns out, KDE Connect is a really great bit of interstitial glue binding together Android and KDE. Can you control the mouse cursor of your desktop with your phone? Yes, you can. Can you type input from your phone to your computer? Yes—should you ever inexplicably prefer a touchscreen to a proper keyboard. Can you send files back and forth between devices? Yes, you can do that too. Get notifications from your phone in the notification widget of KDE? Got it.
   It even has multimedia controls, so when you answer a call on your phone, Amarok or VLC pauses the music you're playing while you take the call, and then resumes playing when you hang up.
   There are plenty of other little features, and that's what makes it so nice. It's one of those applications that doesn't do anything that you'd consider absolutely necessary, but it does a lot of little things that make life easier.

Links
[1] https://www.kde.org/plasma-desktop
[2] http://www.imdb.com/name/nm1244992
[3] http://people.redhat.com/skenlon

Author
Seth Kenlon is an independent multimedia artist, free culture advocate, and UNIX geek. He has worked in the film [2] and computing [3] industries, often at the same time. He is one of the maintainers of the Slackware-based multimedia production project, http://slackermedia.info







Which technologies are poised
to take over in open source?
                                                                                                               BY SCOTT HIRLEMAN

These technologies are quickly gaining ground on open source stalwarts,
creating opportunities for people who become proficient in them.



WHEN YOU THINK of open source technologies, you probably think of the stalwarts, the technologies that have been around for years and years. It makes sense: According to a survey conducted in Q4 of 2016 by my company, Greythorn [1], 30%+ of participants said established technologies are among the top ten they primarily use.
   They may not continue dominating the market for long, however. We compared our survey results from the past three years to identify trends, and our data shows that newer technologies are gaining significant ground on established technologies. For example, Docker is used by 25% of survey respondents, the eighth highest of any technology in the report—and it was only released in 2013. NGINX, used by 14% of survey respondents, is gaining quickly on Apache HTTP Server (18%), which seems to correlate with overall market share trends [2]. Apache Spark (15%) is gaining strongly on the older Apache Hadoop, which was used by 27% of tech professionals participating in our 2015 survey, but only 17% of them in 2017—a drop of 10 percentage points. MapReduce fell similarly from 17% in 2015 to 10% in 2017. Apache Kafka, despite graduating from Apache incubation less than five years ago, reached 11%—not bad for a technology that didn't have a major commercial backer until late 2014.

[Image by Greythorn, All Rights Reserved]

There are several conclusions to draw from the report.
   When we examine the top 10 technologies, eight out of the 10 are 15+ years old, and nine out of 10 are 10+ years old (Docker is the only younger technology represented). However, looking to the next 20 top technologies, we see an onslaught of new arrivals to the industry: 16% of people surveyed are using Apache Cassandra (released in 2008, 1.0 release in 2011), 15% are using Spark (open sourced in 2012, 1.0 release in 2014), 14% are using NGINX (1.0 release in 2011), and 11% are using Kafka (released in early 2011, not yet at a 1.0 release).
   JavaScript is firmly ensconced on the frontend, along with HTML and CSS, but it is also gaining popularity on the backend with Node.js: 14% of respondents said they were currently using it. AngularJS was the most popular JavaScript framework on the frontend at 11% share. ReactJS, which was released in early 2013, is already gaining users quickly, reaching 7%.
   We are seeing a significant increase in the use of big data, DevOps, and microservices-type technologies, which we can expect to continue to accelerate going forward.
   So which technologies are ready to take over? Many of them are contenders to be big players, but the number of tools people are using also continues to expand. That means there will be increased difficulty in finding expertise in all pieces of company-specific tech stacks, but also an opportunity for individuals who want to jump in and develop proficiency in some of these newer technologies. A broader toolset should position you well to take advantage of the technology wave.
   Which technologies or tools are you using now that you weren't using three years ago?

Links
[1] https://twitter.com/Greythorn
[2] https://news.netcraft.com/archives/2017/04/21/april-2017-web-server-survey.html

Author
Scott is a technical recruiter at Greythorn, focusing on the big data and open source software space (think NoSQL, Spark, NGINX, Graph DBs, etc.). He first started learning about NoSQL in March 2011 and quickly fell in love with the space. Scott has a wide-ranging background in tech, including as a VC and in many roles at DataStax, most recently on the community team where he worked closely with the Apache Cassandra community.







Ops: It's everyone's job now
BY CHARITY



   The last decade was all about teaching sysadmins to write code. The next challenge will be teaching
   operations to software developers.



TODAY IS Sysadmin Appreciation Day [1]. Turn to your nearest and dearest systems administrator and be sure to thank them for the work they do.
   "Ops is over."
   "Sysadmins? That's so old school."
   "All the good engineering teams are automating operations out of existence."
   Do you hear this a lot? I do. People love to say that ops is dead. And sure, you can define "ops" to mean all kinds of unpleasant things, many of which should die. But that would be neither accurate nor particularly helpful.
   Here's my definition of operations: Operations is the constellation of your org's technical skills, practices, and cultural values around designing, building, scaling, and maintaining systems. Ops is the process of delivering value to users. Ops is where beautiful theory meets stubborn reality.
   In other words, ops is how you get stuff done. It's not optional. You ship software, you do ops. If business is the "why" and dev is the "what," ops is the "how." We are all interwoven and we all participate in each other's mandates.

Then
Twenty years ago ops engineers were called "sysadmins," and we spent our time tenderly caring for a few precious servers. And then DevOps came along. DevOps means lots of things to lots of people, but one thing it unquestionably meant to lots and lots of people was this: "Dear Ops: learn to write code."
   It was a hard transition for many, but it was an unequivocally good thing. We needed those skills! Complexity was skyrocketing. We could no longer do our jobs without automation, so we needed to learn to write code. It was non-optional.

Now
It's been 10-15 years since the dawn of the automation age, and we're already well into the early years of its replacement: the era of distributed systems.
   Consider the prevailing trends in infrastructure: containers, schedulers, orchestrators. Microservices. Distributed data stores, polyglot persistence. Infrastructure is becoming ever more ephemeral and composable, loosely coupled over lossy networks. Components are shrinking in size while multiplying in count, by orders of magnitude in both directions.
   And then on the client side: take mobile, for heaven's sake.
We are in the early                wares * operating sys-           Ops: it’s everyone’s job now
days of a new era of               tems * apps) is a quan-
                                   tum leap in complexity
                                                                    If the first wave of DevOps transformation focused on lev-
                                                                    eling up ops teams at writing code, the second wave flips
distributed systems.               on its own. Mix that in          the script. You simply can’t develop quality software for
                                   with distributed cache           distributed systems without constant attention to its op-
strategy, eventual consistency, datastores that split their         erability, maintainability, and
brain between client and server, IoT, and the outsourcing           debuggability. You can’t build
of critical components to third-party vendors (which are            modern software without a               Dear software
effectively black boxes), and you start to see why we are           grounding in ops.
all distributed systems engineers in the near and pres-                This transformation is well          engineers: It’s
ent future.                                                         underway, and the evidence is        time to learn ops.
   All this change demands another fundamental shift in             everywhere—venture          dollars
thought and approach. You aren’t just writing code: you’re          pouring into “ops for devs” tooling, the maturing consensus
building systems. Distributed systems require dramatically          that devs must share the on-call rotation, software engineers
more focus on operability and resiliency. Compared to the           popping up at traditionally ops-minded conferences, etc.
old monoliths that we could manage using monitoring and             Ops for devs is officially here.
automation, the new systems require new assumptions:                   This is a good thing! It was good for ops to learn to write
                                                                    code, and it is good for devs to learn to own their own ser-
• D istributed systems are never “up;” they exist in a constant    vices. All of these changes lead to better software, tighter
   state of partially degraded service. Accept failure, design      feedback loops, more robust practices in the face of still-ex-
   for resiliency, protect and shrink the critical path.            ploding complexity.
• You can’t hold the entire system in your head or reason             So no, ops isn’t going anywhere. It just doesn’t look like
   about it; you will live or die by the thoroughness of your       it used to. Soon it might even look like a software engineer.
   instrumentation and observability tooling
• You need robust service registration and discovery, load         Links
   balancing, and backpressure between every combination            [1]	https://en.wikipedia.org/wiki/System_Administrator_
   of components                                                         Appreciation_Day
• You need to learn to integrate third-party services; many        [2] https://honeycomb.io
   core functions will be outsourced to teams or companies
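To make the instrumentation point above concrete, here is a minimal sketch in Python. It is my own illustration, not something from the article, and the service and field names are hypothetical: the idea is simply to emit one wide, structured event per request so you can ask questions of the system after the fact.

import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout-service")  # hypothetical service name

def handle_request(user_id: str, cart_total: float) -> None:
    # Gather all the context for this request into a single event.
    event = {
        "request_id": str(uuid.uuid4()),
        "service": "checkout-service",
        "user_id": user_id,
        "cart_total": cart_total,
    }
    start = time.monotonic()
    try:
        # ... real work would happen here ...
        event["status"] = "ok"
    except Exception as exc:
        # Record the failure context instead of losing it.
        event["status"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
        log.info(json.dumps(event))  # one queryable event per request

handle_request("user-42", 19.99)

The design point is that identifiers, outcome, and timing travel together in one record, which is far easier to slice and query later than scattered log lines.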
Ops: it’s everyone’s job now
If the first wave of DevOps transformation focused on leveling up ops teams at writing code, the second wave flips the script. You simply can’t develop quality software for distributed systems without constant attention to its operability, maintainability, and debuggability. You can’t build modern software without a grounding in ops.
This transformation is well underway, and the evidence is everywhere—venture dollars pouring into “ops for devs” tooling, the maturing consensus that devs must share the on-call rotation, software engineers popping up at traditionally ops-minded conferences, etc. Ops for devs is officially here.
This is a good thing! It was good for ops to learn to write code, and it is good for devs to learn to own their own services. All of these changes lead to better software, tighter feedback loops, more robust practices in the face of still-exploding complexity.
So no, ops isn’t going anywhere. It just doesn’t look like it used to. Soon it might even look like a software engineer.

Links
[1] https://en.wikipedia.org/wiki/System_Administrator_Appreciation_Day
[2] https://honeycomb.io

Author
Charity is an engineer and cofounder/CEO of Honeycomb [2], a next-gen tool for helping software engineers understand their containers/schedulers/microservicified distributed systems and polyglot persistence layers. Likes: databases, operations under pressure, expensive whiskey. Hates: databases, flappy pages, cheap whiskey. Probably swears more than you.





  Why open source should
  be the first choice for
  cloud-native environments
                                                                                                         BY ELIZABETH K. JOSEPH


  For the same reasons Linux beat out proprietary software, open source
  should be the first choice for cloud-native environments.



LET’S TAKE A TRIP back in time to the 1990s, when proprietary software reigned, but open source was starting to come into its own. What caused this switch, and more importantly, what can we learn from it today as we shift into cloud-native environments?

An infrastructure history lesson
I’ll begin with a highly opinionated, open source view of infrastructure’s history over the past 30 years. In the 1990s, Linux was merely a blip on most organizations’ radar, if they knew anything about it. You had early buy-in from companies that quickly saw the benefits of Linux, mostly as a cheap replacement for proprietary Unix, but the standard way of deploying a server was with a proprietary form of Unix or—increasingly—by using Microsoft Windows NT.
The proprietary nature of this tooling provided a fertile ecosystem for even more proprietary software. Software was boxed up to be sold in stores. Even open source got in on the packaging game; you could buy Linux on the shelf instead of tying up your internet connection downloading it from free sources. Going to the store or working with your software vendor was just how you got software.

Ubuntu box packaging on a Best Buy shelf

Where I think things changed was with the rise of the LAMP stack (Linux, Apache, MySQL, and PHP/Perl/Python).
The LAMP stack is a major success story. It was stable, scalable, and relatively user-friendly. At the same time, I started seeing dissatisfaction with proprietary solutions. Once customers had this taste of open source in the LAMP stack, they changed what they expected from software, including:

• reluctance to be locked in by a vendor,
• concern over security,
• desire to fix bugs themselves, and
• recognition that innovation is stifled when software is developed in isolation.



On the technical side, we also saw a massive change in how organizations use software. Suddenly, downtime for a website was unacceptable. There was a move to a greater reliance on scaling and automation. In the past decade especially, we’ve seen a move from the traditional “pet” model of infrastructure to a “cattle” model, where servers can be swapped out and replaced, rather than kept and named. Companies work with massive amounts of data, causing a greater focus on data retention and the speed of processing and returning that data to users.
Open source, with open communities and increasing investment from major companies, provided the foundation to satisfy this change in how we started using software. Systems administrators’ job descriptions began requiring skill with Linux and familiarity with open source technologies and methodologies. Through the open sourcing of things like Chef cookbooks and Puppet modules, administrators could share the configuration of their tooling. No longer were we individually configuring and tuning MySQL in silos; we created a system for handling the basic parts so we could focus on the more interesting engineering work that brought specific value to our employers.
Open source is ubiquitous today, and so is the tooling surrounding it. Companies once hostile to the idea are not only embracing open source through interoperability programs and outreach, but also by releasing their own open source software projects and building communities around it.

A “Microsoft ❤ Linux” USB stick

Turning to the cloud
Today, we’re living in a world of DevOps and clouds. We’ve reaped the rewards of the innovation that open source movements brought. There’s a sharp rise in what Tim O’Reilly called “inner-sourcing [1],” where open source software development practices are adopted inside of companies. We’re sharing deployment configurations for cloud platforms. Tools like Terraform are even allowing us to write and share how we deploy to specific platforms.
But what about these platforms themselves?

“Most people just consume the cloud without thinking ... many users are sinking cost into infrastructure that is not theirs, and they are giving up data and information about themselves without thinking.”
—Edward Snowden, OpenStack Summit, May 9, 2017

It’s time to put more thought into our knee-jerk reaction to move or expand to the cloud.
As Snowden highlighted, we now risk losing control of the data that we maintain for our users and customers. Security aside, if we look back at our list of reasons for switching to open source, high among them were also concerns about vendor lock-in and the inability to drive innovation or even fix bugs.
Before you lock yourself and/or your company into a proprietary platform, consider the following questions:

• Is the service I’m using adhering to open standards, or am I locked in?
• What is my recourse if the service vendor goes out of business or is bought by a competitor?
• Does the vendor have a history of communicating clearly and honestly with its customers about downtime, security, etc.?
• Does the vendor respond to bugs and feature requests, even from smaller customers?
• Will the vendor use our data in a way that I’m not comfortable with (or worse, isn’t allowed by our own customer agreements)?
• Does the vendor have a plan to handle long-term, escalating costs of growth, particularly if initial costs are low?

You may go through this questionnaire, discuss each of the points, and still decide to use a proprietary solution. That’s fine; companies do it all the time. However, if you’re like me and would rather find a more open solution while still benefiting from the cloud, you do have options.

Beyond the proprietary cloud
As you look beyond proprietary cloud solutions, your first option to go open source is to invest in a cloud provider whose core runs on open source software. OpenStack [2] is the industry leader, with more than 100 participating organizations and thousands of contributors in its seven-year history (including me for a time). The OpenStack project has proven that interfacing with multiple OpenStack-based clouds is not only possible, but relatively trivial. The APIs are similar between cloud companies, so you’re not necessarily locked in
to a specific OpenStack vendor. As an open source project, you can still influence the features, bug requests, and direction of the infrastructure.
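As a rough illustration of that portability (my own sketch, not from the article), and assuming the openstacksdk Python library plus a clouds.yaml entry, the same few lines work against any OpenStack-based provider; the cloud name "provider-a" below is a placeholder.

import openstack

# Credentials and endpoints come from a clouds.yaml entry; the code itself
# stays the same from one OpenStack-based provider to the next.
conn = openstack.connect(cloud="provider-a")

for server in conn.compute.servers():
    print(server.name, server.status)

Switching vendors then becomes a configuration change rather than a rewrite.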
The second option is to continue to use proprietary clouds at a basic level, but within an open source container orchestration system. Whether you select DC/OS [3] (built on Apache Mesos [4]), Kubernetes [5], or Docker in swarm mode [6], these platforms allow you to treat the virtual machines served up by proprietary cloud systems as independent Linux machines and install your platform on top of that. All you need is Linux—and don’t get immediately locked into the cloud-specific tooling or platforms. Decisions can be made on a case-by-case basis about whether to use specific proprietary backends, but if you do, try to keep an eye toward the future should a move be required.
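Here is a minimal sketch of that idea, assuming a Kubernetes cluster (however it was provisioned) and the official kubernetes Python client: the code lists cluster nodes and is identical whether those nodes are proprietary-cloud VMs or machines in your own data center.

from kubernetes import client, config

def list_nodes():
    # Load credentials from the local kubeconfig; pointing the same code at a
    # different provider's cluster is just a context switch.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        print(node.metadata.name)

if __name__ == "__main__":
    list_nodes()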
With either option, you also have the choice to depart from the cloud entirely. You can deploy your own OpenStack cloud or move your container platform in-house to your own data center.

Making a moonshot
To conclude, I’d like to talk a bit about open source project infrastructures. Back in March, participants from various open source projects convened at the Southern California Linux Expo [7] to talk about running open source infrastructures for their projects. (For more, read my summary of this event [8].) I see the work these projects are doing as the final step in the open sourcing of infrastructure. Beyond the basic sharing that we’re doing now, I believe companies and organizations can make far more of their infrastructures open source without giving up the “secret sauce” that distinguishes them from competitors.
The open source projects that have open sourced their infrastructures have proven the value of allowing multiple companies and organizations to submit educated bug reports, and even patches and features, to their infrastructure. Suddenly you can invite part-time contributors. Your customers can derive confidence by knowing what your infrastructure looks like “under the hood.”
Want more evidence? Visit Open Source Infrastructure’s [9] website to learn more about the projects making their infrastructures open source (and the extensive amount of infrastructure they’ve released).

Links
[1] https://opensource.com/life/16/11/create-internal-innersource-community
[2] https://www.openstack.org/
[3] https://dcos.io/
[4] http://mesos.apache.org/
[5] https://kubernetes.io/
[6] https://docs.docker.com/engine/swarm/
[7] https://www.socallinuxexpo.org/
[8] https://opensource.com/article/17/3/growth-open-source-project-infrastructures
[9] https://opensourceinfra.org/

Author
After spending a decade doing Linux systems administration, today Elizabeth K. Joseph works as a developer advocate at Mesosphere, focused on DC/OS, Apache Mesos, and Marathon.
As a systems administrator, she worked for a small services provider in Philadelphia before joining HPE, where she worked for four years on the geographically distributed OpenStack Infrastructure team. This team runs the fully open source infrastructure for OpenStack development, and the work led to an interest in other open source projects that have opened up their infrastructures. While working on OpenStack she wrote the book Common OpenStack Deployments.
She is a former member of the Ubuntu Community Council and the co-author of the 8th and 9th editions of The Official Ubuntu Book. At home, she serves on the Board of Directors for Partimus.org, a non-profit in the San Francisco Bay Area providing Linux-based computers to schools and education-focused community centers in need.








What’s the point of DevOps?
                                                                                                                     BY MATT MICENE

True organizational culture change helps you bridge the gaps you thought were uncrossable.



THINK ABOUT the last time you tried to change a personal habit. You likely hit a point where you needed to alter the way you think and make the habit less a part of your identity. This is difficult—and you’re only trying to change your own ways of thinking.
So you may have tried to put yourself in new situations. New situations can actually help us create new habits, which in turn lead to new ways of thinking.
That’s the thing about successful change: It’s as much about outlook as outcome. You need to know why you’re changing and where you’re headed (not just how you’re going to do it), because change for its own sake is often short-lived and short-sighted.
Now think about the changes your IT organization needs to make. Perhaps you’re thinking about adopting something like DevOps. This thing we call “DevOps” has three components: people, process, and tools. People and process are the basis for any organization. Adopting DevOps, therefore, requires making fundamental changes to the core of most organizations—not just learning new tools.
And like any change, it can be short-sighted. If you’re focused on the change as a point solution—“Get a better tool to do alerting,” for example—you’ll likely come up with a narrow vision of the problem. This mode of thinking may furnish a tool with more bells and whistles and a better way of handling on-call rotations. But it can’t fix the fact that alerts aren’t going to the right team, or that those failures remain failures since no one actually knows how to fix the service.
The new tool (or at least the idea of a new tool) creates a moment to have a conversation about the underlying issues that plague your team’s views on monitoring. The new tool allows you to make bigger changes—changes to your beliefs and practices—which, as the foundation of your organization, are even more important.
Creating deeper change requires new approaches to the notion of change altogether. And to discover those approaches, we need to better understand the drive for change in the first place.

Clearing the fences
To understand the need for DevOps, which tries to recombine the traditionally “split” entities of “development” and “operations,” we must first understand how the split came about. Once we “know the use of it,” then we can see the split for what it really is—and dismantle it if necessary.

“In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
—G.K. Chesterton, 1929

Today we have no single theory of management, but we can trace the origins of most modern management theory to Frederick Winslow Taylor. Taylor was a mechanical engineer who created a system for measuring the efficiency of workers at a steel mill. Taylor believed he could apply scientific analysis to the laborers in the mill, not only to improve individual tasks, but also to prove that there was a discoverable best method for performing any task.
We can easily draw a historical tree with Taylor at the root. From Taylor’s early efforts in the late 1880s emerged the time-motion study and other
quality-improvement programs that span the 1920s all the way to today, where we see Six Sigma, Lean, and the like. Top-down, directive-style management, coupled with a methodical approach to studying process, dominates mainstream business culture today. It focuses on efficiency as the primary measure of worker success.
If Taylor is the root of our historical tree, then our next major fork in the trunk would be Alfred P. Sloan of General Motors in the 1920s. The structure Sloan created at GM would not only hold strong there until the 2000s, but also prove to be the major model of the corporation for much of the next 50 years.
In 1920, GM was experiencing a crisis of management—or rather a crisis from the lack thereof. Sloan wrote his “Organizational Study” for the board, proposing a new structure for the multitudes of GM divisions. This new structure centered on the concept of “decentralized operations with centralized control.” The individual divisions, associated now with brands like Chevrolet, Cadillac, and Buick, would operate independently while providing central management the means to drive strategy and control finances.
Under Sloan’s recommendations (and later guidance as CEO), GM rose to a dominant position in the US auto industry. Sloan’s plan created a highly successful corporation from one on the brink of disaster. From the central view, the autonomous units are black boxes. Incentives and goals get set at the top levels, and the teams at the bottom drive to deliver.
The Taylorian idea of “best practices”—standard, interchangeable, and repeatable behaviors—still holds a place in today’s management ideals, where it gets coupled with the hierarchical model of the Sloan corporate structure, which advocates rigid departmental splits and silos for maximum control.
We can point to several management studies that demonstrate this. But business culture isn’t created and propagated only through reading books. Organizational culture is the product of real people in actual situations performing concrete behaviors that propel cultural norms through time. That’s how things like Taylorism and Sloanianism get solidified and come to appear immovable.
Technology sector funding is a case in point. Here’s how the cycle works: Investors only invest in those companies they believe could achieve their particular view of success. This model for success doesn’t necessarily originate from the company itself (and its particular goals); it comes from a board’s ideas of what a successful company should look like. Many investors come from companies that have survived the trials and tribulations of running a business, and as a result they have different blueprints for what makes a successful company. They fund companies that can be taught to mimic their models for success. So companies wishing to acquire funding learn to mimic. In this way, the start-up incubator is a direct way of reproducing a supposedly ideal structure and culture.
The “Dev” and “Ops” split is not the result of personality, diverging skills, or a magic hat placed on the heads of new employees; it’s a byproduct of Taylorism and Sloanianism. Clear and impermeable boundaries between responsibilities and personnel are a management function coupled with a focus on worker efficiency. The management split could have easily landed on product or project boundaries instead of skills, but the history of business management theory through today tells us that skills-based grouping is the “best” way to be efficient.
Unfortunately, those boundaries create tensions, and those tensions are a direct result of opposing goals set by different management chains with different objectives. For example:

Agility ⟷ Stability
Drawing new users ⟷ Existing users’ experience
Application getting features ⟷ Application available to use
Beating the competition ⟷ Protecting revenue
Fixing problems that come up ⟷ Preventing problems before they happen

Today, we can see growing recognition among organizations’ top leaders that the existing business culture (and by extension the set of tensions it produces) is a serious problem. In a 2016 Gartner report, 57 percent of respondents said that culture change was one of the major challenges to the business through 2020. The rise of new methods like Agile and DevOps as a means of effecting organizational changes reflects that recognition. The rise of “shadow IT” [1] is the flip side of the coin; recent estimates peg nearly 30 percent of IT spend outside the control of the IT organization.
These are only some of the “culture concerns” that businesses are having. The need to change is clear, but the path ahead is still governed by the decisions of yesterday.

Resistance isn’t futile

“Bert Lance believes he can save Uncle Sam billions if he can get the government to adopt a simple motto: ‘If it ain’t broke, don’t fix it.’ He explains: ‘That’s the trouble with government: Fixing things that aren’t broken and not fixing things that are broken.’”
—Nation’s Business, May 1977

Typically, change is an organizational response to something gone wrong. In this sense, then, if tension (even adversity) is the normal catalyst for change, then the resistance to
change is an indicator of success. But overemphasis on successful paths can make organizations inflexible, hidebound, and dogmatic. Valuing policy navigation over effective results is a symptom of this growing rigidity.
Success in traditional IT departments has thickened the walls of the IT silo. Other departments are now “customers,” not co-workers. Attempts to shift IT away from being a cost-center create a new operating model that disconnects IT from the rest of the business’ goals. This in turn creates resistance that limits agility, increases friction, and decreases responsiveness. Collaboration gets shelved in favor of “expert direction.” The result is an isolationist view of IT that can only do more harm than good.
And yet as “software eats the world,” IT becomes more and more central to the overall success of the organization. Forward-thinking IT organizations recognize this and are already making deliberate changes to their playbooks, rather than treating change as something to fear.
For instance, Facebook consulted with anthropologist Robin Dunbar [2] on its approach to social groups, but realized the impact this had on internal groups (not just external users of the site) as the company grew. Zappos’ culture has garnered so much praise that the organization created a department focused on training others in their views on core values and corporate culture. And of course, this book is a companion to The Open Organization, a book that shows how open principles applied to management—transparency, participation, and community—can reinvent the organization for our fast-paced, connected era.

Resolving to change

“If the rate of change on the outside exceeds the rate of change on the inside, the end is near.”
—Jack Welch, 2004

A colleague once told me he could explain DevOps to a project manager using only the vocabulary of the Information Technology Infrastructure Library [3] framework.
While these frameworks appear to be opposed, they actually both center on risk and change management. They simply present different processes and tools for such management. This point is important to note when talking about DevOps outside IT. Instead of emphasizing process breakdowns and failures, show how smaller changes introduce less risk, and so on. This is a powerful way to highlight the benefits of changing a team’s culture: Focusing on the new capabilities instead of the old problems is an effective agent for change, especially when you adopt someone else’s frame of reference.
Change isn’t just about rebuilding the organization; it’s also about new ways to cross historically uncrossable gaps—resolving those tensions I mapped earlier by refusing to position things like “agility” and “stability” as mutually exclusive forces. Setting up cross-silo teams focused on outcomes over functions is one of the strategies in play. Bringing different teams, each of whose work relies on the others, together around a single project or goal is one of the most common approaches. Eliminating friction between these teams and improving communications yields massive improvements—even while holding to the iron silo structures of management (silos don’t need to be demolished if they can be mastered). In these cases, resistance to change isn’t an indicator of success; an embrace of change is.
These aren’t “best practices.” They’re simply a way for you to examine your own fences. Every organization has unique fences created by the people within it. And once you “know the use of it,” you can decide whether it needs dismantling or mastering.
This article is part of The Open Organization Guide to IT culture change [4].

Links
[1] https://thenewstack.io/parity-check-dont-afraid-shadow-yet/
[2] http://www.npr.org/2017/01/13/509358157/is-there-a-limit-to-how-many-friends-we-can-have
[3] https://en.wikipedia.org/wiki/ITIL
[4] https://opensource.com/open-organization/resources/culture-change

Author
Matt Micene is an evangelist for Linux and containers at Red Hat. He has over 15 years of experience in information technology, ranging from architecture and system design to data center design. He has a deep understanding of key technologies, such as containers, cloud computing, and virtualization. His current focus is evangelizing Red Hat Enterprise Linux, and how the OS relates to the new age of compute environments. He’s a strong advocate for open source software and has participated in a few projects. Always watching people, how and why decisions get made, he’s never left his anthropology roots far behind.





  The politics of the
  Linux desktop
                                                     BY MIKE BURSELL


  If you’re working in open source, why would you use anything but Linux as your main desktop?



AT SOME POINT in 1997 or 1998—history does not record exactly when—I made the leap from Windows to the Linux desktop. I went through quite a few distributions, from Red Hat to SUSE to Slackware, then Debian, Debian Experimental, and (for a long time thereafter) Ubuntu. When I accepted a role at Red Hat, I moved to Fedora, and migrated both my kids (then 9 and 11) to Fedora as well.
For a few years, I kept Windows as a dual-boot option, and then realized that, if I was going to commit to Linux, then I ought to go for it properly. In losing Windows, I didn’t miss much; there were a few games that I couldn’t play, but it was around the time that the Civilization franchise was embracing Linux, so that kept me happy.
The move to Linux wasn’t plain sailing, by any stretch of the imagination. If you wanted to use fairly new hardware in the early days, you had to first ensure that there were any drivers for Linux, then learn how to compile and install them. If they were not quite my friends, lsmod and modprobe became at least close companions. I taught myself to compile a kernel and tweak the options to make use of (sometimes disastrous) new, “EXPERIMENTAL” features as they came out. Early on, I learned the lesson that you should always keep at least one kernel in your LILO [1] list that you were sure booted fully. I cursed NVidia and grew horrified by SCSI. I flirted with early journalling filesystem options and tried to work out whether the different preempt parameters made any noticeable difference to my user experience or not. I began to accept that printers would never print—and then they started to. I discovered that the Bluetooth stack suddenly started to connect to things.
Over the years, using Linux moved from being an uphill struggle to something that just worked. I moved my mother-in-law and then my father over to Linux so I could help administer their machines. And then I moved them off Linux so they could no longer ask me to help administer their machines.
It wasn’t just at home, either: I decided that I would use Linux as my desktop for work, as well. I even made it a condition of employment for at least one role. Linux desktop support in the workplace caused different sets of problems. The first was the “well, you’re on your own: we’re not going to support you” email from IT support. VPNs were touch and go, but in the end, usually go.
The biggest hurdle was Microsoft Office, until I discovered CrossOver [2], which I bought with my own money, and which allowed me to run company-issued copies of Word, PowerPoint, and the rest on my Linux desktop. Fonts were sometimes a problem, and one company I worked for required Microsoft Lync. For this, and for a few other applications, I would sometimes have to run a Windows virtual machine (VM) on my Linux desktop. Was this a cop out? Well, a little bit: but I’ve always tried to restrict my usage of this approach to the bare minimum.

But why?
“Why?” colleagues would ask. “Why do you bother? Why not just run Windows?”
“Because I enjoy pain,” was usually my initial answer, and then the more honest, “because of the principle of the thing.”
So this is it: I believe in open source. We have a number of very, very good desktop-compatible distributions these days, and most of the time they just work. If you use well-known or supported hardware, they’re likely to “just work”
pretty much as well as the two obvious alternatives, Windows or Mac. And they just work because many people have put much time into using them, testing them, and improving them. So it’s not a case of why wouldn’t I use Windows or Mac, but why would I ever consider not using Linux? If, as I do, you believe in open source, and particularly if you work within the open source community or are employed by an open source organisation, I struggle to see why you would even consider not using Linux.
I’ve spoken to people about this (of course I have), and here are the most common reasons—or excuses—I’ve heard.
1. I’m more productive on Windows/Mac.
2. I can’t use app X on Linux, and I need it for my job.
3. I can’t game on Linux.
4. It’s what our customers use, so why would we alienate them?
5. “Open” means choice, and I prefer a proprietary desktop, so I use that.
Interestingly, I don’t hear “Linux isn’t good enough” much anymore, because it’s manifestly untrue, and I can show that my own experience—and that of many colleagues—belies that.

Rebuttals
Let’s go through those answers and rebut them.
1. I’m more productive on Windows/Mac. I’m sure you are. Anyone is more productive when they’re using a platform or a system they’re used to. If you believe in open source, then I contest that you should take the time to learn how to use a Linux desktop and the associated applications. If you’re working for an open source organization, they’ll probably help you along, and you’re unlikely to find you’re much less productive in the long term. And you know what? If you are less productive in the long term, then get in touch with the maintainers of the apps that are causing you to be less productive and help improve them. You don’t have to be a coder. You could submit bug reports, suggest improvements, write documentation, or just test the most recent versions of the software. And then you’re helping yourself and the rest of the community. Welcome to open source.
2. I can’t use app X on Linux, and I need it for my job. This may be true. But it’s probably less true than you think. The people most often saying this with conviction are audio, video, or graphics experts. It was certainly the case for many years that Linux lagged behind in those areas, but have a look and see what the other options are. And try them, even if they’re not perfect, and see how you can improve them. Alternatively, use a VM for that particular app.
3. I can’t game on Linux. Well, you probably can, but not all the games that you enjoy. This, to be clear, shouldn’t really be an excuse not to use Linux for most of what you do. It might be a reason to keep a dual-boot system or to do what I did (after much soul-searching) and buy a games console (because Elite Dangerous really doesn’t work on Linux, more’s the pity). It should also be an excuse to lobby for your favorite games to be ported to Linux.
4. It’s what our customers use, so why would we alienate them? I don’t get this one. Does Microsoft ban visitors with Macs from their buildings? Does Apple ban Windows users? Does Google allow non-Android phones through their doors? You don’t kowtow to the majority when you’re the little guy or gal; if you’re working in open source, surely you should be proud of that. You’re not going to alienate your customer—you’re really not.
5. “Open” means choice, and I prefer a proprietary desktop, so I use that. Being open certainly does mean you have a choice. You made that choice by working in open source. For many, including me, that’s a moral and philosophical choice. Saying you embrace open source, but rejecting it in practice seems mealy-mouthed, even insulting. Using openness to justify your choice is the wrong approach. Saying “I prefer a proprietary desktop, and company policy allows me to do so” is better. I don’t agree with your decision, but at least you’re not using the principle of openness to justify it.
Is using open source easy? Not always. But it’s getting easier. I think that we should stand up for what we believe in, and if you’re reading Opensource.com [3], then you probably believe in open source. And that, I believe, means that you should run Linux as your main desktop.
Note: I welcome comments, and would love to hear different points of view. I would ask that comments don’t just list application X or application Y as not working on Linux. I concede that not all apps do. I’m more interested in justifications that I haven’t covered above, or (perceived) flaws in my argument. Oh, and support for it, of course.

Links
[1] https://en.wikipedia.org/wiki/LILO_(boot_loader)
[2] https://en.wikipedia.org/wiki/CrossOver_(software)
[3] https://opensource.com/
[4] https://opensource.com/article/17/11/politics-linux-desktop
[5] https://aliceevebob.com/

Author
I’ve been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: not always easy [4]... I’m a security bod and architect, and am currently employed as Chief Security Architect for Red Hat. I have a blog “Alice, Eve & Bob” [5] where I write (sometimes rather parenthetically) about security. I live in the UK and like single malts.







10 open source technology trends for 2018
                                                     BY SREEJITH OMANAKUTTAN


  What do you think will be the next open source tech trends? Here are 10 predictions.




TECHNOLOGY is always evolving. New developments, such as OpenStack, Progressive Web Apps, Rust, R, the cognitive cloud, artificial intelligence (AI), the Internet of Things, and more are putting our usual paradigms on the back burner. Here is a rundown of the top open source trends expected to soar in popularity in 2018.

1. OpenStack gains increasing acceptance
OpenStack [1] is essentially a cloud operating system that offers admins the ability to provision and control huge compute, storage, and networking resources through an intuitive and user-friendly dashboard.
Many enterprises are using the OpenStack platform to build and manage cloud computing systems. Its popularity rests on its flexible ecosystem, transparency, and speed. It supports mission-critical applications with ease and lower costs compared to alternatives. But OpenStack’s complex structure and its dependency on virtualization, servers, and extensive networking resources have inhibited its adoption by a wider range of enterprises. Using OpenStack also requires a well-oiled machinery of skilled staff and resources.
The OpenStack Foundation is working overtime to fill the voids. Several innovations, either released or on the anvil, would resolve many of its underlying challenges. As complexities decrease, OpenStack will surge in acceptance. The fact that OpenStack is already backed by many big software development and hosting companies, in addition to thousands of individual members, makes it the future of cloud computing.

2. Progressive Web Apps become popular
Progressive Web Apps (PWA) [2], an aggregation of technologies, design concepts, and web APIs, offer an app-like experience in the mobile browser.
Traditional websites suffer from many inherent shortcomings. Apps, although offering a more personal and focused engagement than websites, place a huge demand on resources, including needing to be downloaded upfront. PWA delivers the best of both worlds. It delivers an app-like experience to users while being accessible on browsers, indexable on search engines, and responsive to fit any form factor. Like an app, a PWA updates itself to always display the latest real-time information, and, like a website, it is delivered in an ultra-safe HTTPS model. It runs in a standard container and is accessible to anyone who types in the URL, without having to install anything.
PWAs perfectly suit the needs of today’s mobile users, who value convenience and personal engagement over everything else. That this technology is set to soar in popularity is a no-brainer.

3. Rust to rule the roost
Most programming languages come with safety vs. control tradeoffs. Rust [3] is an exception. The language co-opts extensive compile-time checking to offer 100% control without compromising safety. The last Pwn2Own [4]
competition threw up many serious vulnerabilities in Firefox on account of its underlying C++ language. If Firefox had been written in Rust, many of those errors would have manifested as compile-time bugs and been resolved before the product rollout stage.
Rust’s unique approach of built-in unit testing has led developers to consider it a viable first-choice open source language. It offers an effective alternative to languages such as C and Python to write secure code without sacrificing expressiveness. Rust has bright days ahead in 2018.

4. R user community grows
The R [5] programming language, a GNU project, is associated with statistical computing and graphics. It offers a wide array of statistical and graphical techniques and is extensible to boot. It starts where S [6] ends. With the S language already the vehicle of choice for research in statistical methodology, R offers a viable open source route for data manipulation, calculation, and graphical display. An added benefit is R’s attention to detail and care for the finer nuances.
Like Rust, R’s fortunes are on the rise.

5. XaaS expands in scope
XaaS, an acronym for “anything as a service,” stands for the increasing number of services delivered over the internet, rather than on premises. Although software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS) are well-entrenched, new cloud-based models, such as network as a service (NaaS), storage as a service (SaaS or StaaS), monitoring as a service (MaaS), and communications as a service (CaaS), are soaring in popularity. A world where anything and everything is available “as a service” is not far away.
The scope of XaaS now extends to bricks-and-mortar businesses, as well. Good examples are companies such as Uber and Lyft leveraging digital technology to offer transportation as a service and Airbnb offering accommodations as a service.
High-speed networks and server virtualization that make powerful computing affordable have accelerated the popularity of XaaS, to the point that 2018 may become the “year of XaaS.” The unmatched flexibility, agility, and scalability will propel the popularity of XaaS even further.

6. Containers gain even more acceptance
Container technology is the approach of packaging pieces of code in a standardized way so they can be “plugged and run” quickly in any environment. Container technology allows enterprises to cut costs and implementation times. While the potential of containers to revolutionize IT infrastructure has been evident for a while, actual container use has remained complex.
Container technology is still evolving, and the complexities associated with the technology decrease with every advancement. The latest developments make containers quite intuitive and as easy as using a smartphone, not to mention tuned for today’s needs, where speed and agility can make or break a business.
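As a small illustration of the “package once, run anywhere” idea (my sketch, not the author’s), the Docker SDK for Python can run the same image unchanged on a laptop, a CI runner, or a cloud VM; it assumes a local Docker daemon and the docker package is installed.

import docker

client = docker.from_env()  # connect to the local Docker daemon

# The same image runs unchanged wherever a container runtime is available.
output = client.containers.run("alpine:3.7", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())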
7. Machine learning and artificial intelligence expand in scope
Machine learning and AI [7] give machines the ability to learn and improve from experience without a programmer explicitly coding the instruction.
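A tiny sketch of what “learning from experience” means in practice, assuming scikit-learn is installed (my example, not the author’s): no classification rule is written by hand; the model infers one from labeled examples.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # learn from examples
print("accuracy:", model.score(X_test, y_test))          # evaluate on unseen data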
These technologies are already well entrenched, with several open source technologies leveraging them for cutting-edge services and applications.
Gartner predicts [8] the scope of machine learning and artificial intelligence will expand in 2018. Several greenfield areas, such as data preparation, integration, algorithm selection, training methodology selection, and model creation are all set for big-time enhancements through the infusion of machine learning.
New open source intelligent solutions are set to change the way people interact with systems and transform the very nature of work.

• Conversational platforms, such as chatbots, make the question-and-command experience, where a user asks a question and the platform responds, the default medium of interacting with machines.
• Autonomous vehicles and drones, fancy fads today, are expected to become commonplace by 2018.
• The scope of immersive experience will expand beyond video games and apply to real-life scenarios such as design, training, and visualization processes.

8. Blockchain becomes mainstream
Blockchain has come a long way from Bitcoin. The technology is already in widespread use in finance, secure voting, authenticating academic credentials, and more. In the coming year, healthcare, manufacturing, supply chain logistics, and government services are among the sectors most likely to embrace blockchain technology.
Blockchain distributes digital information. The information resides on millions of nodes, in shared and reconciled databases. The fact that it’s not controlled by any single authority and has no single point of failure makes it very robust, transparent, and incorruptible. It also solves the threat of a middleman manipulating the data. Such inherent strengths account for blockchain’s soaring popularity and explain why it is likely to emerge as a mainstream technology in the immediate future.
While the potential of containers to revolutionize IT infra-          9. Cognitive cloud moves to center stage
structure has been evident for a while, actual container use          Cognitive technologies, such as machine learning and artifi-
has remained complex.                                                 cial intelligence, are increasingly used to reduce complexity
   Container technology is still evolving, and the complexi-          and personalize experiences across multiple sectors. One
ties associated with the technology decrease with every ad-           case in point is gamification apps in the financial sector,



which offer investors critical investment insights and reduce the complexities of investment models. Digital trust platforms reduce the identity-verification process for financial institutions by about 80%, improving compliance and reducing chances of fraud.
   Such cognitive technologies are now moving to the cloud, making them even more potent and powerful. IBM Watson is the most well-known example of the cognitive cloud in action. IBM’s UIMA architecture was made open source and is maintained by the Apache Foundation. DARPA’s DeepDive project mirrors Watson’s machine learning abilities to enhance decision-making capabilities over time by learning from human interactions. OpenCog, another open source platform, allows developers and data scientists to develop artificial intelligence apps and programs.
   Considering the high stakes of delivering powerful and customized experiences, these cognitive cloud platforms are set to take center stage over the coming year.

10. The Internet of Things connects more things
At its core, the Internet of Things (IoT) is the interconnection of devices through embedded sensors or other computing devices that enable the devices (the “things”) to send and receive data. IoT is already predicted to be the next major disruptor of the tech space, but IoT itself is in a continuous state of flux.
   One innovation likely to gain widespread acceptance within the IoT space is Autonomous Decentralized Peer-to-Peer Telemetry (ADEPT [9]), which is propelled by IBM and Samsung. It uses a blockchain-type technology to deliver a decentralized network of IoT devices. Freedom from a central control system facilitates autonomous communications between “things” in order to manage software updates, resolve bugs, manage energy, and more.

Open source drives innovation
Digital disruption is the norm in today’s tech-centric era. Within the technology space, open source is now pervasive, and in 2018, it will be the driving force behind most technology innovations.

Links
[1] https://www.openstack.org/
[2] https://developers.google.com/web/progressive-web-apps/
[3] https://www.rust-lang.org/
[4] https://en.wikipedia.org/wiki/Pwn2Own
[5] https://en.wikipedia.org/wiki/R_(programming_language)
[6] https://en.wikipedia.org/wiki/S_(programming_language)
[7] https://opensource.com/tags/artificial-intelligence
[8] https://sdtimes.com/gartners-top-10-technology-trends-2018/
[9] https://insights.samsung.com/2016/03/17/block-chain-mobile-and-the-internet-of-things/
[10] https://www.fingent.com/
[11] https://www.linkedin.com/in/futuregeek/

Author
I have been programming since 2000, and professionally since 2007. I currently lead the Open Source team at Fingent [10] as we work on different technology stacks, ranging from the “boring” (read: tried and trusted) to the bleeding edge. I like building, tinkering with, and breaking things, not necessarily in that order.
   Hit me up at: https://www.linkedin.com/in/futuregeek/ [11]








Kubernetes, standardization,
and security dominated
2017 Linux container news
                                                                                                          BY GORDON HAFF


We round up our most popular Linux container reads from the past year.


CONTAINERS were big news in 2017, on Opensource.com and throughout the IT infrastructure community. Three primary storylines dominated container news over the past year:
• The first is Kubernetes. Kubernetes gained huge momentum as the primary means to combine Open Container Initiative (OCI)-format containers into managed clusters composing an application. Much of Kubernetes’ increasingly broad acceptance is due to its large and active community.
• The second is standardization and decoupling of components. The OCI published the 1.0 versions of its container image and container runtime format specs. CRI-O now provides a lightweight alternative to using Docker as the runtime for Kubernetes orchestration.
• The third storyline is security, with widespread recognition of the need to secure containers on multiple levels against threats caused by unpatched upstream code, attacks on the underlying platform, and production software that isn’t quickly updated.
Let’s take a look at how these storylines are playing out in the open source world.

Kubernetes and orchestration
Containers on their own are fine for individual developers working on their laptops. However, as Dan Walsh notes in How Linux containers have evolved [1]:

    The real power of containers comes about when you start to run many containers simultaneously and hook them together into a more powerful application. The problem with setting up a multi-container application is the complexity quickly grows and wiring it up using simple Docker commands falls apart. How do you manage the placement or orchestration of container applications across a cluster of nodes with limited resources? How does one manage their lifecycle, and so on?

   A variety of orchestration and container scheduling projects have tried to tackle this basic problem; one is Kubernetes, which came out of Google’s internal container work (known as Borg). Kubernetes has continued to rapidly add technical capabilities.
   However, as Anurag Gupta writes in Why is Kubernetes so popular? [2], it’s not just about the tech. He says:
    One of the reasons Kubernetes surged past these other systems is the community and support behind the system: It’s one of the largest open source communities (more than 27,000+ stars on GitHub); has contributions from thousands of organizations (1,409 contributors); and is housed within a large, neutral open source foundation, the Cloud Native Computing Foundation (CNCF).

   Google’s Sarah Novotny offers further insights into what it’s taken to make Kubernetes into a vibrant open source community; her remarks in an April 2017 podcast are summarized in How Kubernetes is making contributing easy [3]. She says it starts “with a goal of being a successful project, so finding adoption, growing adoption, finding contributors, growing the best toolset that they need or a platform that they need and their end users need. That is fundamental.”

Standardization and decoupling
The OCI, part of the Linux Foundation, launched in 2015 “for the express purpose of creating open industry standards around container formats and runtime.” Currently there are two specs: Runtime and Image, and both specs released version 1.0 in 2017.
   The basic idea here is pretty simple. By standardizing at this level, you provide a sort of contract that allows for innovation in other areas.
   Chris Aniszczyk, executive director of the OCI, put it this way [4] in our conversation at the Open Source Leadership Summit in February 2017:

    People have learned their lessons, and I think they want to standardize on the thing that will allow the market to grow. Everyone wants containers to be super-successful, run everywhere, build out the business, and then compete on the actual higher levels, sell services and products around that. And not try to fragment the market in a way where people won’t adopt containers, because they’re scared that it’s not ready.

   Here are a couple of specific examples of what this approach makes possible.
   The CRI-O project started as a way to create a minimal maintainable runtime dedicated to Kubernetes. As Mrunal Patel describes in CRI-O: All the runtime Kubernetes needs [5]:

    CRI-O is an implementation of the Kubernetes CRI [Container Runtime Interface] that allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods... It is a lightweight alternative to using Docker as the runtime for Kubernetes.

   In this way, CRI-O allows for mixing and matching different layers of the container software stack.
   A more recent community project is Buildah. It uses the underlying container storage to build the image and does not require a runtime. As a result, it also uses the host’s package manager(s) to build the image, and therefore the resulting images can be much smaller while still meeting the OCI spec. William Henry’s Getting started with Buildah [6] (published on Project Atomic) provides additional detail.
   As William and I discuss in our free e-book From Pots and Vats to Programs and Apps: How software learned to package itself (PDF) [7], the larger point here is that OCI standardization has freed up a lot of innovation at higher levels of the software stack. Much of the image building, registry pull and push services, and container runtime service are now automated by higher-level tools like OpenShift.

Container security at many levels
Container security happens at many levels; Daniel Oh counts 10 layers of Linux container security [8]. It starts at the familiar infrastructure level. This is where technical features like SELinux, cgroups, and seccomp come in. Security of the platform is just one reason I say the operating system matters even more in 2017 [9] across many aspects of containers.
   However, Daniel also observes the many other container layers you need to consider. “What’s inside your container matters.” He adds that “as with any code you download from an external source, you need to know where the packages originated, who built them, and whether there’s any malicious code inside them.”
   Perhaps less familiar to traditional software development processes is securing the build environment, the software deployment pipeline itself. Daniel notes,

    managing this build process is key to securing the software stack. Adhering to a ‘build once, deploy everywhere’ philosophy ensures that the product of the build process is exactly what is deployed in production. It’s also important to maintain the immutability of your containers—in other words, do not patch running containers; rebuild and redeploy them instead.

   Still other areas of concern include securing the Kubernetes cluster, isolating networks, securing both persistent and ephemeral storage, and managing APIs.

Onward to 2018
I expect all three of these areas to remain important topics in 2018. However, I think one of the biggest stories will be the continued expansion of the open source ecosystem around containers. The landscape document from the Cloud Native Computing Foundation [10] gives some sense
of the overall scope, but it includes everything from the container runtimes to orchestration to monitoring to provisioning to logging to analysis.
   It’s as good an illustration as any of the level of activity taking place in the open source communities around containers and the power of the open source development model to create virtuous cycles of activity.

Links
[1] https://opensource.com/article/17/7/how-linux-containers-evolved
[2] https://opensource.com/article/17/10/why-kubernetes-so-popular
[3] https://opensource.com/article/17/4/podcast-kubernetes-sarah-novotny
[4] http://bitmason.blogspot.com/2017/02/podcast-open-container-initiative-with.html
[5] https://opensource.com/article/17/12/cri-o-all-runtime-kubernetes-needs
[6] http://www.projectatomic.io/blog/2017/11/getting-started-with-buildah/
[7] https://goo.gl/FSfgky
[8] https://opensource.com/article/17/10/10-layers-container-security
[9] https://opensource.com/16/12/yearbook-why-operating-system-matters
[10] https://github.com/cncf/landscape

Author
Gordon Haff is a Red Hat technology evangelist and frequent and highly acclaimed speaker at customer and industry events, and helps develop strategy across Red Hat’s full portfolio of cloud solutions. He is the co-author of From Pots and Vats to Programs and Apps: How Software Learned to Package Itself, in addition to numerous other publications. Prior to Red Hat, Gordon wrote hundreds of research notes, was frequently quoted in publications like The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies. Earlier in his career, he was responsible for bringing a wide range of computer systems, from minicomputers to large UNIX servers, to market while at Data General. Gordon has engineering degrees from MIT and Dartmouth and an MBA from Cornell’s Johnson School.




                                      Keep in touch!
           Sign up to receive roundups of our best articles,
          giveaway alerts, and community announcements.
        Visit opensource.com/email-newsletter to subscribe.




                           Best Trio of 2017
SpamAssassin, MIMEDefang,
and Procmail                                                                                                     BY DAVID BOTH




OUR ANNUAL “BEST COUPLE” AWARD has expanded to a trio of applications that combine to manage server-side email sorting beautifully.

   In 2015 [1] and 2016 [2], I awarded “Best Couple” to two open source commands or program types that, combined, make my world a better place. This year, the “Best Couple” prize has turned into the “Best Trio,” because resolving the problem I set out to fix—effective server-side email sorting—took three pieces of software working together. Here’s how I got everything to work using SpamAssassin, MIMEDefang, and Procmail, three common and freely available open source software packages.

The problem
To make managing my email easier, I like to sort incoming messages into a few folders (in addition to the inbox). Spam is always filed into the spam folder, and I look at it every couple of days in case something I want was marked as spam. I also sort email from a couple of other sources into specific folders. Everything else is filed into the inbox by default.
   A quick word about terminology to begin: Sorting is the process of classifying email and storing it in an appropriate folder. Filters like SpamAssassin [3] classify the email. MIMEDefang [4] uses that classification to mark a message as spam by adding a text string to the subject line. That classification allows other software to file the email into the designated folders. I had been using those two applications, and I needed software to do this last bit—the one that does the filing.
   I have several email filters set up in Thunderbird [5], the best client I have found for my personal needs. Both my wife and I use email filters on our computers. When we travel or use our handheld devices, those filters don’t always work because Thunderbird—or any other email client with filters—must be running on my computer at home in order to perform the filtering tasks. I can set up filters on my laptop to sort email when I’m traveling, but that means I have to maintain multiple sets of filters.
   There was also a technical problem I wanted to fix. Client-side email filtering relies on scanning messages after they are deposited in the inbox. For some unknown reason, sometimes the client does not delete (expunge) the moved messages from the inbox. This may be an issue with Thunderbird (or it may be a problem with my configuration of Thunderbird). I have worked on this problem for years with no success, even through multiple complete re-installations of Fedora and Thunderbird.
   Additionally, spam is a major problem for me. I have my own email server, and I use several email addresses. I have had some of those email accounts for a couple of decades, and they have become major spam magnets. In fact, I regularly get between 1,200 and 1,500 spam emails each day—my record is just over 2,500 spam emails in a single day—and the numbers keep increasing.
   To solve my problems, I needed a method for filing emails (i.e., sorting them into appropriate folders) that was server-based rather than client-based. This would solve several issues: I wouldn’t need to leave an email client running on my home workstation just to perform filtering. I wouldn’t have to delete or expunge messages—especially spam—from our
inboxes. And I wouldn’t need to configure filters in multiple locations—I would need them in only one location, the server.

My email server
I chose Sendmail as my email server in about 1997, when I switched from OS/2 to Red Hat Linux 5, as I’d already been using it for several years at work. It’s been my mail transfer agent (MTA) [6] ever since, for both business and personal use. (I don’t know why Wikipedia refers to MTA as a “message” transfer agent, when all my other references say “mail” transfer agent. The Talk tab of the Wikipedia page has a bit of discussion about this, which generated even more confusion for me.)
   I’ve been using SpamAssassin and MIMEDefang together to score and mark incoming emails as spam, placing a known string in the subject, ####SPAM####, so that I can identify and sort junk email both as a human and with software. I use UW IMAP [7] for client access to emails, but that is not a factor in server-side filtering and sorting.
   Yes, I use a lot of old-school software for the server side of email, but it is well known, it works well, and I understand how to make it do the things I need it to do.

Project requirements
I believe having a well-defined set of requirements is imperative before starting a project. Based on my description of the problem, I created five simple requirements for this project:
1. Sort incoming spam emails into the spam folder on the server side using the identifying text that is already being added to the subject line.
2. Sort other incoming emails into designated folders.
3. Circumvent problems with moved messages not being deleted or expunged from the inbox.
4. Keep the existing SpamAssassin and MIMEDefang software.
5. Make sure any new software is easy to install and configure.
This set of objectives meant that I would need a sorting program that integrates well with the parts I already have.

Procmail
After extensive research, I settled on the venerable Procmail [8]. I know—more old stuff—and pretty much unsupported these days, too. But it does what I need it to do and is known to work well with the software I am already using. It is stable and has no known serious bugs. It can be configured for use at the system level as well as at the individual user level.
   Red Hat and Red Hat-based distributions, such as CentOS and Fedora, use Procmail as the default mail delivery agent (MDA) [9] for SendMail, so it does not even need to be installed; it is already there. My server runs CentOS, so using Procmail is a real no-brainer.
   In addition to delivering email, Procmail can be used to filter and sort it. Procmail rules (known as recipes) can be used to identify spam and delete or sort it into a designated mail folder. Other recipes can identify and sort other mail as well. Procmail can be used for many other things besides sorting email into designated folders, such as automated forwarding, duplication, and much more. Those other tasks are beyond the scope of this article, but understanding sorting should give you a better understanding of how to accomplish those other tasks.
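For example, here are two minimal recipe sketches along those lines. The addresses and folder names are made up for illustration and are not taken from my configuration; the first recipe forwards matching mail to another account, and the second uses the c flag to file a copy of a message while letting the original continue through the rest of the recipes.

# Forward mail sent to a hypothetical retired alias on to another account
:0
* ^To:.*old-alias@example\.org
! current-address@example.com

# File a copy of order receipts in their own folder; the "c" flag makes a
# carbon copy, so the original message still falls through to later recipes
:0 c
* ^Subject:.*receipt
$HOME/Receipts

Without the c flag, a recipe that delivers a message ends processing for that message, which is the “first match wins” behavior that the sorting recipes shown later in this article rely on.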
How it works
There are so many ways of using SpamAssassin, MIMEDefang, and Procmail together for anti-spam solutions, so I won’t go deeply into how to configure them. Instead, I will focus on how I integrated these three packages to implement my own solution.
   Incoming email processing begins with SendMail. I added this line to my sendmail.mc configuration file:

INPUT_MAIL_FILTER(`mimedefang', `S=unix:/var/spool/MIMEDefang/mimedefang.sock, T=S:5m;R:5m')dnl

This line calls MIMEDefang as part of email processing. Be sure to run the make command after making any configuration changes to SendMail, then restart SendMail. (For more information, see Chapter 8 of SpamAssassin: A Practical Guide to Integration and Configuration [10].)
   SpamAssassin can run as standalone software in some applications; however, in this environment it is not run as a daemon. It is called by MIMEDefang, and each email is first processed by SpamAssassin to generate a spam score for it.
   SpamAssassin provides a default set of rules, but you can modify the scores for existing rules, add your own rules, and create whitelists and blacklists by modifying the /etc/mail/spamassassin/local.cf file. This file can grow quite large; mine is just over 70KB and still growing.
   SpamAssassin uses the set of default and custom rules and scores to generate a total score for each email. MIMEDefang uses SpamAssassin as a subroutine and receives the spam score as a return code.
   MIMEDefang is programmed in Perl, so it is easy to hack. I have hacked the last major portion of the code in /etc/mail/mimedefang-filter to provide a filtering breakdown with a little more granularity than the default. Here’s how this section of the code looks on my installation (I have made significant changes to this portion of the code, so yours probably will not look much like this):

#################################################################
# Determine how to handle the email based on its spam score and #
# add an appropriate X-Spam-Status header and alter the subject.#
#################################################################
# Set required_hits in sa-mimedefang.cf to get value for $req   #
#################################################################
if ($hits >= $req) {
  action_add_header("X-Spam-Status", "Spam, score=$hits required=$req tests=$names");
  action_change_header("Subject", "####SPAM#### ($hits) $Subject");
  action_add_part($entity, "text/plain", "-suggest", "$report\n", "SpamAssassinReport.txt", "inline");
  # action_discard();
} elsif ($hits >= 8) {
  action_add_header("X-Spam-Status", "Probably, score=$hits required=$req tests=$names");
  action_change_header("Subject", "####Probably SPAM#### ($hits) $Subject");
  action_add_part($entity, "text/plain", "-suggest", "$report\n", "SpamAssassinReport.txt", "inline");
} elsif ($hits >= 5) {
  action_add_header("X-Spam-Status", "Possibly, score=$hits required=$req tests=$names");
  action_change_header("Subject", "####Possibly SPAM#### ($hits) $Subject");
  action_add_part($entity, "text/plain", "-suggest", "$report\n", "SpamAssassinReport.txt", "inline");
} elsif ($hits >= 0.00) {
  action_add_header("X-Spam-Status", "Probably not, score=$hits required=$req tests=$names");
  # action_add_part($entity, "text/plain", "-suggest", "$report\n", "SpamAssassinReport.txt", "inline");
} else {
  # If score (hits) is less than or equal to 0
  action_add_header("X-Spam-Status", "No, score=$hits required=$req tests=$names");
  # action_add_part($entity, "text/plain", "-suggest", "$report\n", "SpamAssassinReport.txt", "inline");
}

Here’s the line in that code that changes the subject line of the email:

action_change_header("Subject", "####SPAM#### ($hits) $Subject");

Actually, it calls another Perl subroutine to change the subject line using the string I want to add as an argument, but the effect is the same. The subject line now contains the string ####SPAM#### and the spam score (i.e., the variable $hits). Having this known string in the subject line makes further filtering easy.
   The modified email is returned to SendMail for further processing, and SendMail calls Procmail to act as the MDA.
   Procmail uses global and user-level configuration files, but the global /etc/procmailrc file and individual user ~/.procmailrc files must be created. The structure of the files is the same, but the global file operates on all incoming email, while local files can be configured for each individual user. Since I don’t use a global file, all the sorting is done at the user level. My .procmailrc file is simple:

# .procmailrc file for david@both.org
# Rules are run sequentially - first match wins

PATH=/usr/sbin:/usr/bin
MAILDIR=$HOME/mail   # location of your mailboxes
DEFAULT=/var/spool/mail/david

# Send Spam to the spam mailbox
# This is my new style SPAM subject
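# Each recipe that follows uses the same three-line pattern explained later
# in this article: ":0" opens the recipe, the "*" line holds a regular
# expression condition, and the last line names the destination mailbox.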
:0
* ^Subject:.*####SPAM####
$HOME/spam

# Political stuff goes here. Must be using my political email
# address
:0
* ^To:.*political
$HOME/Political

# SysAdmin stuff goes here. Usually system log messages
:0
* ^Subject:.*(Logwatch|rkhunter|Anacron|Cron|Fail2Ban)
$HOME/AdminStuff

# drops messages into the default box
:0
* .*
$DEFAULT

Note that the .procmailrc file must be located in my email account’s home directory on the email server, not in the home directory on my workstation. Because most email accounts are not login accounts, they use the nologin program as the default shell, so an admin must create and maintain these files. The other option is to change to a login shell, such as Bash, and set passwords so that knowledgeable users can log in to their email accounts on the server and maintain their .procmailrc files.
   Each Procmail recipe starts with :0 (yes, that is a zero) on the first line and contains a total of three lines. The second line starts with * and contains a conditional statement consisting of a regular expression (regex) that Procmail compares to each line in the incoming email. If there is a match, Procmail sorts the email into the folder specified by the third line. The ^ symbol denotes the beginning of the line when making the comparison.
   The first recipe in my .procmailrc file sorts the spam identified in the subject line by MIMEDefang into my spam folder. The second recipe sorts political email (identified by a special email address I use for my volunteer work for various political organizations) into its own folder. The third recipe sorts the huge amount of system emails I receive from the many computers I deal with into a mailbox for my system administrator duties. This setup makes those emails very easy to find.
   Note the use of parentheses to enclose a list of strings to match. Each string is separated by a vertical bar, aka the pipe ( | ), which is used as a logical “or.” So the conditional line

* ^Subject:.*(Logwatch|rkhunter|Anacron|Cron|Fail2Ban)

reads, “if the Subject line contains Logwatch or rkhunter or ... or Fail2Ban.” Since Procmail ignores case, there is no need to create recipes that look for various combinations of upper and lower case.
   The last recipe drops all email that does not match another recipe into the default folder, usually the inbox.
   Having the .procmailrc file in my home directory does not cause Procmail to filter my mail. I have to add one more file, the following ~/.forward file, which tells Procmail to filter all of my incoming email:

# .forward file
# process all incoming mail through procmail - see .procmailrc
# for the filter rules.
|/usr/bin/procmail

It is not necessary to restart either SendMail or MIMEDefang when creating or modifying the Procmail configuration files.
   For more detail about the configuration of Procmail and creation of recipes, see the SpamAssassin book [11] and the Procmail [12] information in the RHEL Deployment Guide.

A few additional notes
Note that MIMEDefang must be started first, before SendMail, so it can create the socket where SendMail sends emails for processing. I have a short script (automate everything!) I use to stop and restart SendMail and MIMEDefang in the correct order so that new or modified rules in the local.cf file take effect.
   I already have a large body of rules and score modifiers in my SpamAssassin local.cf file so, although I could have used Procmail by itself for spam filtering and sorting, it would have taken a lot of work to convert all of those rules. I also think SpamAssassin does a better job of scoring because it does not rely on a single rule to match, but rather the aggregate score from all the rules, as well as scores from Bayesian filtering.
   Procmail works very well when matches can be made very explicit with known strings, such as the ones I have configured MIMEDefang to place in the subject line. I think Procmail works better as a final sorting stage in the spam-filtering process than as a complete solution by itself. That said, I know that many admins have made complete spam filtering solutions using nothing more than Procmail.
   Now that I have server-side filtering in place, I am somewhat less limited in my choice of email clients, because I no longer need a client that performs filtering and sorting. Nor do I need to leave an email client running all the time to perform that filtering and sorting.

Reports of Procmail’s demise are greatly exaggerated
In my research for this article, I found a number of Google results (dating from 2001 to 2013) that declared Procmail to be dead. Evidence includes broken web pages, missing source code, and a sentence on Wikipedia [13] that declares Procmail to be dead and links to more recent replacements. However, all Red Hat, Fedora, and CentOS distributions
install Procmail as the MDA for SendMail. The Red Hat, Fedora, and CentOS repositories all have the source RPMs for Procmail, and the source code is also on GitHub [14].
   Considering Red Hat’s continued use of Procmail, I have no problem using this mature software that does its job silently and without fanfare.

Resources
• SpamAssassin: A Practical Guide to Configuration, Customization, and Integration [15] also contains information about MIMEDefang and Procmail
• SpamAssassin [16] on Wikipedia
• MIMEDefang [17] on Wikipedia
• Red Hat’s Procmail [18] documentation
• Procmail FAQ [19]

Links
[1] https://opensource.com/business/15/12/best-couple-2015-tar-and-ssh
[2] https://opensource.com/article/16/12/yearbook-best-couple-2016-display-manager-and-window-manager
[3] https://en.wikipedia.org/wiki/SpamAssassin
[4] https://en.wikipedia.org/wiki/MIMEDefang
[5] https://opensource.com/sitewide-search?search_api_views_fulltext=thunderbird
[6] https://en.wikipedia.org/wiki/Message_transfer_agent
[7] https://en.wikipedia.org/wiki/UW_IMAP
[8] https://en.wikipedia.org/wiki/Procmail
[9] https://en.wikipedia.org/wiki/Mail_delivery_agent
[10] https://www.packtpub.com/networking-and-servers/spamassassin-practical-guide-integration-and-configuration
[11] https://www.packtpub.com/networking-and-servers/spamassassin-practical-guide-integration-and-configuration
[12] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s1-email-mda.html
[13] https://en.wikipedia.org/wiki/Procmail
[14] https://github.com/Distrotech/procmail
[15] https://www.packtpub.com/networking-and-servers/spamassassin-practical-guide-integration-and-configuration
[16] https://en.wikipedia.org/wiki/SpamAssassin
[17] https://en.wikipedia.org/wiki/MIMEDefang
[18] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s1-email-mda.html
[19] http://www.iki.fi/~era/procmail/mini-faq.html

Author
David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for OS/2 Magazine, Linux Magazine, Linux Journal, and OpenSource.com. His article “Complete Kickstart,” co-authored with a colleague at Cisco, was ranked 9th in the Linux Magazine Top Ten Best System Administration Articles list for 2008.








Creative Commons: 1.2 billion
strong and growing                                                                                                  BY BEN COTTON

Creative Commons shares 2016 State of the Commons report, and here are a few highlights.


“THE STATE OF THE COMMONS IS STRONG.” The 2016 State of the Commons [1] report, issued by Creative Commons [2], does not begin with those words, but it could. The report shows an increase in adoption for the suite of licenses, but that is not the whole story. As Creative Commons CEO Ryan Merkley says in the report, “CC’s new organizational strategy is focused on increasing the vibrancy and usability of the commons (not just its breadth and volume).” Creative Commons produces and maintains a suite of licenses focused on enabling people to share their knowledge and creative works, as well as developing a community around this idea.
   Many of the projects involving the use of Creative Commons licenses have an educational focus. For example, the South Asian Journal of Management [3], a leading journal according to ScholarOne of Thomson Reuters, uses Creative Commons licenses. PLOS ONE [4], produced by the Public Library of Science, is the first multidisciplinary open access journal. Five percent of PLOS ONE articles use the CC0 license—a universal dedication to the public domain. At the UCSF School of Medicine, Dr. Amin Azzam’s medical students contribute to Wikipedia articles. Wikipedia itself is probably the best-known CC-licensed project in the world. Wikipedia’s 42.5 million articles in 294 languages are entirely licensed under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
   In addition to education, cultural and artistic works are highlighted in this year’s report. The African Storybook initiative has a collection of thousands of picture books for children provided under an open access model. 264 back issues, spanning 1960-1990 [5], of the Indonesian literary magazine Horison were digitized in cooperation with CC Indonesia. These issues are now being incorporated into school curricula. At the Metropolitan Museum of Art [6] in New York, 375,000 digital works have been released into the public domain via CC0.
   User-produced visual media is another strong part of the commons. YouTube [7] has 30 million CC-licensed videos, and Vimeo [8] adds another 5.8 million. Additionally, photo-sharing site Flickr [9] has 381,101,415 CC-licensed or public domain images, including 4.5 million public domain images added in the past year. When it comes to the written word, the popular Medium [10] boasts 257,000 CC-licensed posts. (Also worth noting is that Opensource.com is a participant here, with a default CC BY-SA license on articles.)
   Traditional media does not have an exclusive claim to membership in the commons. Both 3D printing and computer modeling have brought a whole new dimension to creative works. The British Museum released 128 models to the 3D online and VR sharing platform Sketchfab [11]. This allows unprecedented access to the museum’s collection. And if you’d like to have a candy dish shaped like a human skull, there’s a design [12] for that on Thingiverse. Five levels of community remix produced this thing, one of 1.6 million CC-licensed objects on the popular 3D printing site.
   All told, Creative Commons counts 1,204,935,089 CC-licensed works (up from 1.1 billion in 2015). Of those, nearly 93 million are in the public domain. Sixty-five percent of works use a “free culture” license (CC0, CC BY, CC BY-SA, and other public domain tools), which allows remix and commercial uses.
   Thirty volunteers worked to translate the 2016 State of the Commons report into 12 languages for today’s release. The report is available on the Creative Commons website [13]. Discuss it in the comments below and on social media with the hashtag “#sotc”.

Links
[1] https://stateof.creativecommons.org
[2] https://creativecommons.org
[3] http://iqra.edu.pk/isl/sajms/
[4] http://journals.plos.org/plosone/
[5] https://commons.wikimedia.org/wiki/Category:Majalah_Horison
[6] http://www.metmuseum.org
[7] https://support.google.com/youtube/answer/2797468?hl=en
[8] https://vimeo.com/creativecommons
[9] https://www.flickr.com/
[10] https://medium.com
[11] https://sketchfab.com/britishmuseum
[12] https://www.thingiverse.com/thing:27944
[13] https://stateof.creativecommons.org
[14] http://www.cyclecomputing.com

Author
Ben Cotton is a meteorologist by training and a high-performance computing engineer by trade. Ben works as a technical evangelist at Cycle Computing [14]. He is a Fedora user and contributor, co-founded a local open source meetup group, and is a member of the Open Source Initiative and a supporter of Software Freedom Conservancy. Find him on Twitter (@FunnelFiasco) or at FunnelFiasco.com.





24 Pull Requests challenge encourages fruitful contributions
                                                                                                      BY BEN COTTON
  16,720 pull requests were opened. Of those, 10,327 were merged and 1,240 were closed.


IN 2012, Andrew Nesbitt was inspired by the 24 Ways to impress your friends [1] advent calendar to start a new project: 24 Pull Requests [2], an open source contribution event. Participants are challenged to open one pull request for an open source project on GitHub every day from December 1 through December 24.
   It’s a lofty goal, to be sure. Of the 1,877 participants who submitted at least one pull request in the 2016 event, only 36 were able to meet the challenge. But that’s okay, Nesbitt says, as any participation “helps to make the world of open source a better place.” The total count from participants was 16,720 opened pull requests. Of those, 10,327 have been merged and 1,240 were closed. This suggests the pull requests were of a useful nature, by and large.
   Driving high-quality contributions is a key focus of 24 Pull Requests. Unlike other contribution challenges, no prizes are offered. This is the result of a conscious decision to prevent participants from trying to game the system by submitting trivial requests that just add overhead for project maintainers. In addition, the leader boards on the 24 Pull Requests website are randomized in order to avoid encouraging point-driven development. Nesbitt told Opensource.com, “Participants instead are rewarded by learning how to contribute to an open source project, making new friends and feeling like they have given back to the communities that they are part of.”
   The core team selects featured projects based on communities they know are welcoming for new contributors and have maintainers available during the month of December to handle the incoming requests. In addition, participants can suggest other featured projects based on the same criteria. Core members check to ensure the suggested project is active so that participants don’t waste time creating a pull request that will never get merged. The contribulator [3] tool, developed by Nesbitt and others, scores projects based on criteria that indicate how welcoming the project is.
   Projects are searchable by GitHub issue type (e.g., “bug” and “documentation”) as well as by language, which makes it easy to find a starting point. For many open source newcomers, that can be the hardest part [4]. Although Nesbitt has no firm numbers, he has heard anecdotally of people who made their first pull request as part of 24 Pull Requests participation and went on to become regular open source contributors. In 2015, Victoria Holland wrote for Opensource.com [5] that the 24 Pull Requests event inspired her to make her first contributions to other projects.
   As the premise suggests, 24 Pull Requests has entered a state of hibernation until December 1. However, the code is on GitHub [6], ready for improvements and suggestions year-round.

Links
[1] https://24ways.org/
[2] https://24pullrequests.com
[3] https://github.com/24pullrequests/contribulator
[4] https://opensource.com/life/16/11/guide-beginner-contributors
[5] https://opensource.com/life/15/2/the-pull-requests-challenge
[6] https://github.com/24pullrequests/24pullrequests
[7] http://www.cyclecomputing.com

Author
Ben Cotton is a meteorologist by training and a high-performance computing engineer by trade. Ben works as a technical evangelist at Cycle Computing [7]. He is a Fedora user and contributor, co-founded a local open source meetup group, and is a member of the Open Source Initiative and a supporter of Software Freedom Conservancy. Find him on Twitter (@FunnelFiasco) or at FunnelFiasco.com.







Openness is key to working with Gen Z
BY JEN KELCHNER

   Members of Generation Z operate openly by default. Are you ready to work with them?




LEADERS AND MANAGERS everywhere collectively groan at the thought of a new cohort to manage. Boomers and Gen Xers typically try to align the new kids on the block with Millennials—which would be a mistake. While Gen Z and Millennials have similarities, their motivators and influencers are vastly different. Each of the differences affects attraction, recruitment, and retention of Gen Z talent.
   Could open organizational models be the keys to seeing this generation excel in the workplace?
   Let’s take a look.

Shaping a generation
Cohort birth dates are always controversial. However, the consensus seems to hold that Gen Z consists of people born between 1996 and 2012. This places the oldest members of the group at 21 years old, which means you are likely already working alongside one. By 2020, Gen Z is expected to grow to 2.56B globally [1].
   And yet, each generation is defined by more than a span of years. Key external influencers shape and motivate a group, and these are where anyone must start before leading one.
   Two major factors influence Generation Z: They’ve grown up in a post-9/11 world and through the Great Recession.
   While even the oldest in the generation do not recall the day of 9/11, they fully understand its impact. They see a day when terror dropped to their doorsteps and their innocence was lost. For their entire life spans, our country has been at war. They experience heightened security when traveling or even attending a local sporting event. Terrorism is a real and constant threat—both domestically and globally.
   The second force shaping their identity is growing up through the Great Recession [2]. If they didn’t experience the economic downturn firsthand, they were likely directly connected to it through a close friend or relative. Job loss and foreclosures were rampant. It was possible to have experienced the loss of their home or even a parent during these tumultuous times. This generation has felt the high cost of stress to the family unit.

The struggle is real
Gen Z are neck and neck with Millennials for having stress levels higher than any other generation at this time. Let that sink in: They’re dealing with unhealthy levels of stress—daily. Because they came of age during economic decline, job insecurity, and increasing inequality, they often have trouble seeing how they can succeed as adults.
   That stress [3] manifests in several ways for members of Generation Z:

• 79% worry about getting a job
• 72% worry about debt
• 70% say inequality worries them greatly
• 70% say terrorism concerns them

All this is important for business leaders to understand, as the business impact of stress costs our organizations a great deal (both financially and in terms of productivity). Because members of Generation Z harbor increased anxiety, pessimism, a distrust for government, awareness of social unfairness, and the like, they are deeply driven by a desire for transparency,
authenticity, and genuine connections with community. We have an opportunity to create work environments that curate amazing talent—especially in open organizations.

Thriving in the open
As an evangelist for open principles, I firmly believe that openness would allow Gen Z to thrive in the workplace. This is a generation poised to be disruptors like we have never seen. Closely aligned with the Silent Generation [4], they will be makers, creators, inventors, and social problem solvers.
   Because of Generation Z’s need to control environments and financial opportunities, we’ll see more of this generation becoming entrepreneurs and freelance contributors. Creating a workplace culture of transparency, contribution, collaboration, meritocracy, diversity, and inclusivity will attract and engage this generation. Individuals from all generations are seeking these principles—but openness seems to be inherent to Gen Z.
   To accommodate that orientation toward openness, leaders should be aware of several points:

• Diversity and inclusivity: Gen Z is multicultural and the last generation to be majority caucasian (52.9% [5]).
• Collaboration and community: This generation has been globally connected since toddlerhood and engages in global, remote conversations throughout the day.
• Transparency: Access to media and a heavy leaning towards realism has them grounded and cautious to trust others, which is why this principle is crucial.
• Adaptability: Real-time access to data has developed the expectation of access anywhere at any time in order to understand and make decisions on the fly.
• Meritocracy: Gen Z wants to contribute to something that makes a difference. Members also want to earn their place, with contribution making meritocracy an attractive environment.

Where to start
The quick run-down for attracting, recruiting, and engaging Gen Z in the workplace:

• Rewards should be monetary. Bonuses are expected to be in real time—so think cash, not 401K matching.
• Offer tours of duty. They want to solve problems collaboratively in a project-based environment more than they want a set and repetitive job.
• Real-time, face-to-face feedback. They want face time even if delivered virtually.
• As true digital natives, STEM capabilities are important to them.
• Connect to purpose and mission. It won’t be enough that your company has a social good impact; they want to know their personal contribution delivers that as well.
• Expectations of their leaders. In order of importance, leaders should clearly communicate, be supportive, and be honest.
• Diversity and inclusion matter. They are the most diverse generation, with an expectation and belief in diversity.
• Mobility is critical. They plan to work outside of the U.S. and travel frequently. Creating the structures necessary for this to happen will become a key to retention.

As you begin to build relationships with members of Gen Z, above all, enter into those relationships assuming positive intent. Take time to understand their motivations and get to know them. Offer them respect, as they will be quick to return it. Include them in idea sessions; they will surprise you. Be appreciative of their work ethics—even though it might not be the old-school 8-5 you’ve always known. Don’t dismiss them as children; they are incredibly bright, insightful, and globally connected game-changers.
   The best management advice [6] on Gen Z, you ask? Empower them and partner alongside them as they create new solutions for a globally connected world.

Links
[1] https://www.fbicgroup.com/sites/default/files/Gen%20Z%20Report%202016%20by%20Fung%20Global%20Retail%20Tech%20August%2029,%202016.pdf
[2] https://en.wikipedia.org/wiki/Great_Recession
[3] http://www.inc.com/jessica-stillman/gen-z-is-anxious-distrustful-and-often-downright-miserable-new-poll-reveals.html
[4] https://en.wikipedia.org/wiki/Silent_Generation
[5] http://www.mediapost.com/publications/article/257641/multiracial-gen-z-and-the-future-of-marketing.html
[6] https://ldr21.com/gen-z-entering-the-work-place/
[7] http://ldr21.com
[8] http://dragonfli.co

Author
Jen Kelchner is the co-founder & CEO of LDR21 [7] and co-creator of dragonfli [8], a groundbreaking platform for building, measuring, and tracking human agility in the workplace. She advises leaders on organization and culture change based on open organization principles. She is a founding member of the Forbes Coaches Council, a Deloitte alum, and a member of the Open Organization Ambassador team. Her recent contributions to the opensource.com community are a monthly column, the Open Organization definition, the Open Organization maturity model, and the Open Organization Workbook. Jen’s personal Open Index score is 93%.







5 big ways AI is rapidly invading our lives
BY RIKKI ENDSLEY


Let’s look at five real ways we’re already surrounded by artificial intelligence.




OPEN SOURCE PROJECTS are helping drive [1] artificial intelligence advancements, and we can expect to hear much more about how AI impacts our lives as the technologies mature. Have you considered how AI is changing the world around you already? Let’s take a look at our increasingly artificially enhanced universe and consider the bold predictions about our AI-influenced future.

1. AI influences your purchasing decisions
A recent story on VentureBeat [2], “How AI will help us decipher millennials [3],” caught my eye. I confess that I haven’t given much thought to artificial intelligence—nor have I had a hard time deciphering millennials—so I was curious to learn more. As it turns out, the headline was a bit misleading; “How to sell to millennials” would have been a more accurate title.
   According to the article, the millennial generation is “the demographic segment so coveted that marketing managers from all over the globe are fighting over them.” By analyzing online behavior—be it shopping, social media, or other activities—machine learning can help predict behavioral patterns, which can then turn into targeted advertising. The article goes on to explain how the Internet of Things and social media platforms can be mined for data points. “Using machine learning to mine social media data allows companies to determine how millennials talk about its products, what their sentiments are towards a product category, how they respond to competitors’ advertising campaigns, and a multitude of other data that can be used to design targeted advertising campaigns,” the article explains. That AI and millennials are the future of marketing is no huge surprise, but Gen Xers and Baby Boomers, you’re not off the hook yet.
   AI is being used to target entire groups—including cities—of people based on behavior changes. For example, an article on Raconteur [4], “How AI will change buyer behaviour [5],” explains that the biggest strength of AI in the online retail industry is its ability to adapt quickly to fluid situations that change customer behavior. Abhinav Aggarwal, chief executive of artificial intelligence startup Fluid AI [6], says that his company’s software was being used by a client to predict customer behavior, and the system noticed a change during a snow storm. “Users who would typically ignore the e-mails or in-app notifications sent in the middle of the day were now opening them as they were stuck at home without much to do. Within an hour the AI system adapted to the new situation and started sending more promotional material during working hours,” he explains.
   AI is changing how, why, and when we spend money, but how is it changing the way we earn our paychecks?

2. AI is changing how we work
A recent Fast Company article, “This is how AI will change your work in 2017 [7],” says that job seekers will benefit from
artificial intelligence. The author explains that AI will be used to send job seekers alerts for relevant job openings, in addition to updates on salary trends, when you’re due for a promotion, and the likelihood that you’ll get one.
   Artificial intelligence also will be used by companies to help on-board new talent. “Many new hires get a ton of information during their first couple of days on the job, much of which won’t get retained,” the article explains. Instead, a bot could “drip information” to a new employee over time as it becomes more relevant.
   On Inc. [8], “Businesses Beyond Bias: How AI Will Reshape Hiring Practices [9]” looks at how SAP SuccessFactors [10], a talent management solutions provider, leverages AI as a job description “bias checker” and to check for bias in employee compensation.
   Deloitte’s 2017 Human Capital Trends Report [11] indicates that AI is motivating organizations to restructure. Fast Company’s article “How AI is changing the way companies are organized [12]” examines the report, which was based on surveys with more than 10,000 HR and business leaders around the world. “Instead of hiring the most qualified person for a specific task, many companies are now putting greater emphasis on cultural fit and adaptability, knowing that individual roles will have to evolve along with the implementation of AI,” the article explains. To adapt to changing technologies, organizations are also moving away from top-down structures and to multidisciplinary teams, the article says.

3. AI is transforming education
Education budgets are shrinking, whereas classroom sizes are growing, so leveraging technological advancements can help improve the productivity and efficiency of the education system, and play a role in improving the quality and affordability of education, according to an article on VentureBeat. “How AI will transform education in 2017 [13]” says that this year we’ll see AI grading students’ written answers, bots answering students’ questions, virtual personal assistants tutoring students, and more. “AI will benefit all the stakeholders of the education ecosystem,” the article explains. “Students would be able to learn better with instant feedback and guidance, teachers would get rich learning analytics and insights to personalize instruction, parents would see improved career prospects for their children at a reduced cost, schools would be able to scale high-quality education, and governments would be able to provide affordable education to all.”

4. AI is reshaping healthcare
A February 2017 article on CB Insights [14] rounded up 106 artificial intelligence startups in healthcare [15], and many of those raised their first equity funding round within the past couple of years. “19 out of the 24 companies under imaging and diagnostics raised their first equity funding round since January 2015,” the article says. Other companies on the list include those working on AI for remote patient monitoring, drug discovery, and oncology.
   An article published on March 16 on TechCrunch that looks at how AI advances are reshaping healthcare [16] explains, “Once a better understanding of human DNA is established, there is an opportunity to go one step further and provide personalized insights to individuals based on their idiosyncratic biological dispositions. This trend is indicative of a new era of ‘personalized genetics,’ whereby individuals are able to take full control of their health through access to unprecedented information about their own bodies.”
   The article goes on to explain that AI and machine learning are lowering the cost and time to discover new drugs. Thanks in part to extensive testing, new drugs can take more than 12 years to enter the market. “ML algorithms can allow computers to ‘learn’ how to make predictions based on the data they have previously processed or choose (and in some cases, even conduct) what experiments need to be done. Similar types of algorithms also can be used to predict the side effects of specific chemical compounds on humans, speeding up approvals,” the article says. In 2015, the article notes, a San Francisco-based startup, Atomwise [17], completed analysis on two new drugs to reduce Ebola infectivity within one day, instead of taking years.
   Another startup, London-based BenevolentAI [18], is harnessing AI to look for patterns in scientific literature. “Recently, the company identified two potential chemical compounds that may work on Alzheimer’s, attracting the attention of pharmaceutical companies,” the article says.
   In addition to drug discovery, AI is helping with discovering, diagnosing, and managing new diseases. The TechCrunch article explains that, historically, illnesses are diagnosed based on symptoms displayed, but AI is being used to detect disease signatures in the blood, and to develop treatment plans using deep learning insights from analyzing billions of clinical cases. “IBM’s Watson is working with Memorial Sloan Kettering in New York to digest reams of data on cancer patients and treatments used over decades to present and suggest treatment options to doctors in dealing with unique cancer cases,” the article says.

5. AI is changing our love lives
More than 50 million active users across 195 countries swipe through potential mates with Tinder [19], a dating
app launched in 2012. In a Forbes Interview podcast [20], Tinder founder and chairman Sean Rad spoke with Steven Bertoni about how artificial intelligence is changing the dating game. In an article [21] about the interview, Bertoni quotes Rad, who says, “There might be a moment when Tinder is just so good at predicting the few people that you’re interested in, and Tinder might do a lot of the leg work in organizing a date, right?” So instead of presenting users with potential partners, the app would make a suggestion for a nearby partner and take it a step further: coordinate schedules and set up a date.
   Are you in love with AI yet? Future generations literally might fall in love with artificial intelligence. An article by Raya Bidshahri on Singularity Hub [22], “How AI will redefine love [23],” says that in a few decades we might be arguing that love is not limited by biology.
   “Our technology, powered by Moore’s law, is growing at a staggering rate—intelligent devices are becoming more and more integrated to our lives,” Bidshahri explains, adding, “Futurist Ray Kurzweil predicts that we will have AI at a human level by 2029, and it will be a billion times more capable than humans by the 2040s. Many predict that one day we will merge with powerful machines, and we ourselves may become artificially intelligent.” She argues that it’s inevitable in such a world that humans would accept being in love with entirely non-biological beings.
   That might sound a bit freaky, but falling in love with AI is a more optimistic outcome than a future in which robots take over the world. “Programming AI to have the capacity to feel love can allow us to create more compassionate AI and may be the very key to avoiding the AI apocalypse many fear,” Bidshahri says.
   This list of big ways AI is invading all areas of our lives barely scrapes the surface of the artificial intelligence bubbling up around us. Which AI innovations are most exciting—or troubling—to you? Let us know about them in the comments.

Links
[1] https://www.linux.com/news/open-source-projects-are-transforming-machine-learning-and-ai
[2] https://twitter.com/venturebeat
[3] http://venturebeat.com/2017/03/16/how-ai-will-help-us-decipher-millennials/
[4] https://twitter.com/raconteur
[5] https://www.raconteur.net/technology/how-ai-will-change-buyer-behaviour
[6] http://www.fluid.ai/
[7] https://www.fastcompany.com/3066620/this-is-how-ai-will-change-your-work-in-2017
[8] https://twitter.com/Inc
[9] http://www.inc.com/bill-carmody/businesses-beyond-bias-how-ai-will-reshape-hiring-practices.html
[10] https://www.successfactors.com/en_us.html
[11] https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends.html?id=us:2el:3pr:dup3575:awa:cons:022817:hct17
[12] https://www.fastcompany.com/3068492/how-ai-is-changing-the-way-companies-are-organized
[13] http://venturebeat.com/2017/02/04/how-ai-will-transform-education-in-2017/
[14] https://twitter.com/CBinsights
[15] https://www.cbinsights.com/blog/artificial-intelligence-startups-healthcare/
[16] https://techcrunch.com/2017/03/16/advances-in-ai-and-ml-are-reshaping-healthcare/
[17] http://www.atomwise.com/
[18] https://twitter.com/benevolent_ai
[19] https://twitter.com/Tinder
[20] https://www.forbes.com/podcasts/the-forbes-interview/#5e962e5624e1
[21] https://www.forbes.com/sites/stevenbertoni/2017/02/14/tinders-sean-rad-on-how-technology-and-artificial-intelligence-will-change-dating/#4180fc2e5b99
[22] https://twitter.com/singularityhub
[23] https://singularityhub.com/2016/08/05/how-ai-will-redefine-love/

Author
Rikki Endsley is a community manager for Opensource.com. In the past, she worked as the community evangelist on the Open Source and Standards (OSAS) team at Red Hat; a freelance tech journalist; community manager for the USENIX Association; associate publisher of Linux Pro Magazine, ADMIN, and Ubuntu User; and as the managing editor of Sys Admin magazine and UnixReview.com. Follow her on Twitter at: @rikkiends.






Getting started with .NET for Linux
BY DON SCHENCK


  Microsoft’s decision to make .NET Core open source means it’s time for
  Linux developers to get comfortable and start experimenting.




WHEN YOU KNOW a software developer’s preferred operating system, you can often guess what programming language(s) they use. If they use Windows, the language list includes C#, JavaScript, and TypeScript. A few legacy devs may be using Visual Basic, and the bleeding-edge coders are dabbling in F#. Even though you can use Windows to develop in just about any language, most stick with the usuals.
   If they use Linux, you get a list of open source projects: Go, Python, Ruby, Rails, Grails, Node.js, Haskell, Elixir, etc. It seems that as each new language—Kotlin, anyone?—is introduced, Linux picks up a new set of developers.
   So leave it to Microsoft (Microsoft?!?) to throw a wrench into this theory by making the .NET framework, coined .NET Core, open source and available to run on any platform: Windows, Linux, MacOS, and even a television OS, Samsung’s Tizen. Add in Microsoft’s other .NET flavors, including Xamarin, and you can add the iOS and Android operating systems to the list. (Seriously? I can write a Visual Basic app to run on my TV? What strangeness is this?)
   Given this situation, it’s about time Linux developers get comfortable with .NET Core and start experimenting, perhaps even building production applications. Pretty soon you’ll meet that person: “I use Linux and I write C# apps.” Brace yourself: .NET is coming.

How to install .NET Core on Linux
The list of Linux distributions on which you can run .NET Core includes Red Hat Enterprise Linux (RHEL), Ubuntu, Debian, Fedora, CentOS, Oracle, and SUSE.
   Each distribution has its own installation instructions [1]. For example, consider Fedora 26:
   Step 1: Add the dotnet product feed.

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod\nbaseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/dotnetdev.repo'

   Step 2: Install the .NET Core SDK.

sudo dnf update
sudo dnf install libunwind libicu compat-openssl10
sudo dnf install dotnet-sdk-2.0.0

Creating the Hello World console app
Now that you have .NET Core installed, you can create the ubiquitous “Hello World” console application before learning more about .NET Core. After all, you’re a developer: You want to create and run some code now. Fair enough; this is
easy. Create a directory, move into it, create the code, and run it:

mkdir helloworld && cd helloworld
dotnet new console
dotnet run

You’ll see the following output:

$ dotnet run
Hello World!

What just happened?
Let’s take what just happened and break it down. We know what the mkdir and cd did, but after that?

dotnet new console
As you no doubt have guessed, this created the “Hello World!” console app. The key things to note are: The project name matches the directory name (i.e., “helloworld”); the code was built using a template (console application); and the project’s dependencies were automatically retrieved by the dotnet restore command, which pulls from nuget.org [2].
   If you view the directory, you’ll see these files were created:

Program.cs
helloworld.csproj

Program.cs is the C# console app code. Go ahead and take a look inside (you already did ... I know ... because you’re a developer), and you’ll see what’s going on. It’s all very simple.

Helloworld.csproj is the MSBuild-compatible project file. In this case there’s not much to it. When you create a web service or website, the project file will take on a new level of significance.

dotnet run
This command did two things: It built the code, and it ran the newly built code. Whenever you invoke dotnet run, it will check to see if the *.csproj file has been altered and will run the dotnet restore command. It will also check to see if any source code has been altered and will, behind the scenes, run the dotnet build command which—you guessed it—builds the executable. Finally, it will run the executable.
   Sort of.

Where is my executable?
Oh, it’s right there. Just run which dotnet and you’ll see (on RHEL):

/opt/rh/rh-dotnet20/root/usr/bin/dotnet

That’s your executable.
   Sort of.
   When you create a dotnet application, you’re creating an assembly … a library … yes, you’re creating a DLL. If you want to see what is created by the dotnet build command, take a peek at bin/Debug/netcoreapp2.0/. You’ll see helloworld.dll, some JSON configuration files, and a helloworld.pdb (debug database) file. You can look at the JSON files to get some idea as to what they do (you already did … I know … because you’re a developer).
   When you run dotnet run, the process that runs is dotnet. That process, in turn, invokes your DLL file and it becomes your application.

It’s portable
Here’s where .NET Core really starts to depart from the Windows-only .NET Framework: The DLL you just created will run on any system that has .NET Core installed, whether it be Linux, Windows, or MacOS. It’s portable. In fact, it is literally called a “portable application.”

Forever alone
What if you want to distribute an application and don’t want to ask the user to install .NET Core on their machine? (Asking that is sort of rude, right?) Again, .NET Core has the answer: the standalone application.
   Creating a standalone application means you can distribute the application to any system and it will run, without the need to have .NET Core installed. This means a faster and easier installation. It also means you can have multiple applications running different versions of .NET Core on the same system. It also seems like it would be useful for, say, running a microservice inside a Linux container. Hmmm…

What’s the catch?
Okay, there is a catch. For now. When you create a standalone application using the dotnet publish command, your DLL is placed into the target directory along with all of the .NET bits necessary to run your DLL. That is, you may see 50 files in the directory. This is going to change soon. An already-running-in-the-lab initiative, .NET Native, will soon be introduced with a future release of .NET Core. This will build one executable with all the bits included. It’s just like when you are compiling in the Go language, where you specify the target platform and you get one executable; .NET will do that as well.
   You do need to build once for each target, which only makes sense. You simply include a runtime identifier [3] and build the code, like this example, which builds the release version for RHEL 7.x on a 64-bit processor:

dotnet publish -c Release -r rhel.7-x64




Web services, websites, and more
So much more is included with the .NET Core templates, including support for F# and Visual Basic. To get a starting list of available templates that are built into .NET Core, use the command dotnet new --help.
   Hint: .NET Core templates can be created by third parties. To get an idea of some of these third-party templates, check out these templates [4], then let your mind start to wander…
   Like most command-line utilities, contextual help is always at hand by using the --help command switch. Now that you’ve been introduced to .NET Core on Linux, the help function and a good web search engine are all you need to get rolling.

Other resources
Ready to learn more about .NET Core on Linux? Check out these resources:

• Redhatloves.net [5]
• Dot.net [6]
• Live.asp.net [7]
• Transitioning to .NET Core on Red Hat Enterprise Linux [8]

Links
[1] https://www.microsoft.com/net/core
[2] https://www.nuget.org/
[3] https://docs.microsoft.com/en-us/dotnet/core/rid-catalog
[4] https://github.com/dotnet/templating/wiki/Available-templates-for-dotnet-new
[5] https://developers.redhat.com/topics/dotnet/
[6] https://www.microsoft.com/net
[7] https://live.asp.net/
[8] https://developers.redhat.com/promotions/dot-net-core/

Author
A developer who has seen it all, Don is a Microsoft MVP and currently a Director of Developer Experience at Red Hat, with a focus on Microsoft .NET on Linux. His mission is to connect .NET developers with the Linux and open source communities. Prior to Red Hat, Don was a Developer Advocate at Rackspace, where he was immersed in cloud technology. He still enjoys cooking and studying human behavior, and still hates the designated hitter rule.
   Don’s overarching belief is this: “A program is not a communication between a developer and a machine; it’s a communication between a developer and the next developer.”




            Transparency. Adaptability. Inclusivity.
                 Community. Collaboration.




                                                                        It's open. For business.
                                                                        opensource.com/open-organization






Why Go is skyrocketing in popularity
BY JEFF ROUSE

In only two years, Golang leaped from the 65th most popular programming language to #17.
Here’s what’s behind its rapid growth.




THE GO PROGRAMMING LANGUAGE [1], sometimes referred to as Google’s golang, is making strong gains in popularity. While languages such as Java and C continue to dominate programming, new models have emerged that are better suited to modern computing, particularly in the cloud. Go’s increasing use is due, in part, to the fact that it is a lightweight, open source language suited for today’s microservices architectures. Container darling Docker and Google’s container orchestration product Kubernetes [2] are built using Go. Go is also gaining ground in data science, with strengths that data scientists are looking for in overall performance and the ability to go from “the analyst’s laptop to full production.”
   As an engineered language (rather than something that evolved over time), Go benefits developers in multiple ways, including garbage collection, native concurrency, and many other native capabilities that reduce the need for developers to write code to handle memory leaks or networked apps. Go also provides many other features that fit well with microservices architectures and data science.
   Because of this, Go is being adopted by interesting companies and projects. Recently an API for Tensorflow [3] has been added, and products like Pachyderm [4] (next-gen data processing, versioning, and storage) are being built using Go. Heroku’s Force.com [5] and parts of Cloud Foundry [6] were also written in Go. More names are being added regularly.

Rising popularity and usage
In the September 2017 TIOBE Index for Go, you can clearly see the incredible jump in Go popularity since 2016, including being named TIOBE’s Programming Language Hall of Fame winner for 2016, as the programming language with the highest rise in ratings in a year. In November 2017 it stood at #17 on the monthly list, up from #19 in 2016, and up from #65 in 2015.

[Figure: TIOBE Index for Go. Source: TIOBE [7].]




   The Stack Overflow Survey 2017 also shows signs of Go’s rise in popularity. Stack Overflow’s comprehensive survey of 64,000 developers tries to get at developers’ preferences by asking about the “Most Loved, Dreaded, and Wanted Languages.” This list is dominated by newer languages like Mozilla’s Rust, Smalltalk, Typescript, Apple’s Swift, and Google’s Go. But for the third year in a row, Rust, Swift, and Go made the top five “most loved” programming languages.

[Figure: Most Loved, Dreaded, and Wanted Languages. Source: Stackoverflow.com [8].]

Go advantages
Some programming languages were hacked together over time, whereas others were created academically. Still others were designed in a different age of computing with different problems, hardware, and needs. Go is an explicitly engineered language intended to solve problems with existing languages and tools while natively taking advantage of modern hardware architectures. It has been designed not only with teams of developers in mind, but also long-term maintainability.
   At its core, Go is pragmatic. In the real world of IT, complex, large-scale software is written by large teams of developers. These developers typically have varying skill levels, from juniors up to seniors. Go is easy to become functional with and appropriate for junior developers to work on.
   Also, having a language that encourages readability and comprehension is extremely useful. The mixture of duck typing (via interfaces) and convenience features such as “:=” for short variable declarations gives Go the feel of a dynamically typed language while retaining the positives of a strongly typed one.
   Go’s native garbage collection removes the need for developers to do their own memory management, which helps negate two common issues:

• First, many programmers have come to expect that memory management will be done for them.
• Second, memory management requires different routines for different processing cores. Manually attempting to account for each configuration can significantly increase the risk of introducing memory leaks.

Go’s native concurrency is a boon for network applications that live and die on concurrency. From APIs to web servers to web app frameworks, Go projects tend to focus on networking, distributed functions, and/or services for which Go’s goroutines and channels are well suited (a short sketch appears at the end of this article).

Suited for data science
Extracting business value from large datasets is quickly becoming a competitive advantage for companies and a very active area in programming, encompassing specialties like artificial intelligence, machine learning, and more. Go has multiple strengths in these areas of data science, which is increasing its use and popularity.

• Superior error handling and easier debugging are helping it gain popularity over Python and R, the two most commonly used data science languages.
• Data scientists are typically not programmers. Go helps with both prototyping and production, so it ends up being a more robust language for putting data science solutions into production.
• Performance is fantastic, which is critical given the explosion in big data and the rise of GPU databases. Go does not have to call in C/C++ based optimizations for performance gains, but gives you the ability to do so.

Seeds of Go’s expansion
Software delivery and deployment have changed dramatically. Microservices architectures have become key to unlocking application agility. Modern apps are designed to be cloud-native and to take advantage of loosely coupled cloud services offered by cloud platforms.
   Go is an explicitly engineered programming language, specifically designed with these new requirements in mind. Written expressly for the cloud, Go has been growing in popularity because of its mastery of concurrent operations and the beauty of its construction.
   Not only is Google supporting Go, but other companies are aiding in market expansion as well. For example, Go code is supported and expanded with enterprise-level distributions such as ActiveState’s ActiveGo [9]. As an open source movement, the golang.org [10] site and annual GopherCon [11] conferences form the basis of a strong, modern open source community that allows new ideas and new energy to be brought into Go’s development process.
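The goroutines, channels, short variable declarations, and implicit interfaces discussed above are easier to see in a few lines of code than to describe. The sketch below is not from the article; the worker count, the greeter interface, and the messages are illustrative assumptions only:

package main

import (
	"fmt"
	"sync"
)

// greeter is satisfied implicitly by any type with a Greet() method --
// the "duck typing via interfaces" mentioned above.
type greeter interface {
	Greet() string
}

type gopher struct{ name string }

func (g gopher) Greet() string { return "Hello from " + g.name }

func main() {
	// ":=" infers the types; no explicit declarations are needed.
	workers := 3
	results := make(chan string, workers)

	var wg sync.WaitGroup
	for i := 1; i <= workers; i++ {
		wg.Add(1)
		// Each "go" statement starts a goroutine; the channel
		// collects results without explicit locking.
		go func(id int) {
			defer wg.Done()
			var g greeter = gopher{name: fmt.Sprintf("goroutine %d", id)}
			results <- g.Greet()
		}(i)
	}

	wg.Wait()
	close(results)

	for msg := range results {
		fmt.Println(msg)
	}
}

Because Go is garbage collected, none of the values above need to be freed by hand, and a plain "go build" typically produces a single self-contained binary, which is part of why the language fits containerized microservices so well.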



Links
[1] https://golang.org/
[2] https://opensource.com/sitewide-search?search_api_views_fulltext=Kubernetes
[3] https://www.tensorflow.org/
[4] http://www.pachyderm.io/
[5] https://github.com/heroku/force
[6] https://www.cloudfoundry.org/
[7] https://www.tiobe.com/tiobe-index/go/
[8] https://insights.stackoverflow.com/survey/2017#most-loved-dreaded-and-wanted
[9] https://www.activestate.com/activego
[10] https://golang.org/
[11] https://www.gophercon.com/
[12] https://activestate.com

Author
Director of Product Management at ActiveState [12]. Jeff is passionate about innovation, entrepreneurship, open source, and building great products. With over 20 years of experience from startups to publicly traded technology firms, Jeff has led development teams and business operations, created innovative products, and taken them to market. At ActiveState, Jeff oversees product strategy and innovation across all product lines.




W R I T E        F O R        U S
  Would you like to write for Opensource.com? Our editorial calendar includes upcoming themes, community columns, and
  topic suggestions: https://opensource.com/calendar

  Learn more about writing for Opensource.com at: https://opensource.com/writers

  We're always looking for open source-related articles on the following topics:

        Big data: Open source big data tools, stories, communities, and news.
        Command-line tips: Tricks and tips for the Linux command line.
        Containers and Kubernetes: Getting started with containers, best practices, security, news, projects, and case studies.
        Education: Open source projects, tools, solutions, and resources for educators, students, and the classroom.
        Geek culture: Open source-related geek culture stories.
        Hardware: Open source hardware projects, maker culture, new products, howtos, and tutorials.
        Machine learning and AI: Open source tools, programs, projects, and howtos for machine learning and artificial intelligence.
        Programming: Share your favorite scripts, tips for getting started, tricks for developers, tutorials, and tell us about your favorite programming languages and communities.
        Security: Tips and tricks for securing your systems, best practices, checklists, tutorials and tools, case studies, and security-related project updates.



LEARNING

Introduction to the Domain Name System (DNS)
BY DAVID BOTH

  Learn how the global DNS system makes it possible for us to assign memorable names to the
  worldwide network of machines we connect to every day.


SURFING THE WEB is fun and easy, but think what it would be like if you had to type in the IP address of every website you wanted to view. For example, locating a website would look like this when you type it in: https://54.204.39.132, which would be nearly impossible for most of us to remember. Of course, using bookmarks would help, but suppose your friend tells you about a cool new website and tells you to go to 54.204.39.132. How would you remember that? Telling someone to go to "Opensource.com" is far easier to remember. And, yes, that is our IP address.

The Domain Name System provides the database to be used in the translation from human-readable hostnames, such as www.opensource.com, to IP addresses, like 54.204.39.132, so that your internet-connected computers and other devices can access them. The primary function of the BIND [1] (Berkeley Internet Name Domain) software is that of a domain name resolver that uses that database. There is other name resolver software, but BIND is currently the most widely used DNS software on the internet. I will use the terms name server, DNS, and resolver pretty much interchangeably throughout this article.

Without these name resolver services, surfing the web as freely and easily as we do would be nearly impossible. As humans, we tend to do better with names like Opensource.com, while computers do much better with numbers like 54.204.39.132. Therefore, we need a translation service to convert the names that are easy for us to the numbers that are easy for our computers.

127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6

# Lab hosts
192.168.25.1    server
192.168.25.21   host1
192.168.25.22   host2
192.168.25.23   host3
192.168.25.24   host4

In small networks, the /etc/hosts file on each host can be used as a name resolver. Maintaining copies of this file on several hosts can become very time-consuming, and errors can cause much confusion and wasted time before they are found. I did this for several years on my own network, and it eventually became too much trouble to maintain even with only the usual eight to 12 computers I usually have operational. Ultimately, I converted to running my own name server to resolve both internal and external hostnames.

Most networks of any size require centralized management of this service with name services software such as BIND. Hosts use the Domain Name System (DNS) to locate IP addresses from the names given in software such as web browsers, email clients, SSH, FTP, and many other internet services.

How a name search works
Let's take a look at a simplified example of what happens when a name request for a web page is made by a client service on your computer. For this example, I will use www.opensource.com [2] as the website I want to view in my browser. I also assume that there is a local name server on the network, as is the case with my own network.

1. First, I type in the URL or select a bookmark containing that URL. In this case, the URL is www.opensource.com.
2. The browser client, whether it is Opera, Firefox, Chrome, Lynx, Links, or any other browser, sends the request to the operating system.
3. The operating system first checks the /etc/hosts file to see if the URL or hostname is there. If so, the IP address of that entry is returned to the browser. If not, I proceed to the next step. In this case, I assume that it is not.
4. The URL is then sent to the first name server specified in /etc/resolv.conf. In this case, the IP address of the first name server is my own internal name server. For this example, my name server does not have the IP address for www.opensource.com cached and must look further afield. Let's go on to the next step.
5. The local name server sends the request to a remote name server. This can be one of two destination types, one type of which is a forwarder. A forwarder is simply another name server, such as the ones at your ISP, or a public name server, such as Google at 8.8.8.8 or 8.8.4.4. The other destination type is that of the top-level root name servers [3]. The root servers don't usually respond with the desired target IP address of www.opensource.com; they respond with the authoritative name server for that domain. The authoritative name servers are the only ones that have the authority to maintain and modify name data for a domain.
   The local name server is configured to use the root name servers, so the root name server for the .com top-level domain returns the IP address of the authoritative name server [4] for www.opensource.com. That IP address could be for any one of the three (at the time of this writing) name servers, ns1.redhat.com, ns2.redhat.com, or ns3.redhat.com.
6. The local name server then sends the query to the authoritative name server, which returns the IP address for www.opensource.com.
7. The browser uses the IP address for www.opensource.com to send a request for a web page, which is downloaded to my browser.

One of the important side effects of this name search is that the results are cached for a period of time by my local name server. That means that the next time I, or anyone on my network, wants to access Opensource.com, the IP address is probably already stored locally, which prevents doing a remote lookup.

The DNS database
The DNS system is dependent upon its database to perform lookups on hostnames to locate the correct IP address. The DNS database is a general-purpose distributed, hierarchical, replicated database. It also defines the style of hostname used on the internet, properly called a FQDN (Fully Qualified Domain Name).

FQDNs consist of complete hostnames such as hornet.example.com and test1.example.com. FQDNs break down into three parts.

1. The TLDN (Top-Level Domain Names [5]), such as .com, .net, .biz, .org, .info, .edu, and so on, provide the last segment of a FQDN. All TLDNs are managed on the root name servers. Aside from country top-level domains such as .us, .uk, and so on, there were originally only a few main top-level domains. As of February 2017, there are 1528 top-level domains.
2. The second level domain name is always immediately to the left of the top-level domain when specifying a hostname or URL, so names like Redhat.com, Opensource.com, Getfedora.org, and example.com provide the organizational address portion of the FQDN.
3. The third level of the FQDN is the hostname portion of the name, so the FQDN of a specific host in a network would be something like host1.example.com.

Figure 1 shows a simplified diagram of the DNS database hierarchy. The "top" level, which is represented by a single dot (.), has no real physical existence. It is a device for use in DNS zone file configuration to enable an explicit end stop for domain names. A bit more on this later.

The true top level consists of the root name servers. These are a limited number of servers that maintain the top-level DNS databases. The root level may contain the IP addresses for some domains, and the root servers will directly provide those IP addresses where they are available. In other cases, the root servers provide the IP addresses of the authoritative server for the desired domain.

For example, assume I want to browse www.opensource.com. My browser makes the request of the local name server, which does not contain that IP address. My local name server is configured to use the root servers when an address is not found in the local cache, so it sends the request for www.opensource.com to one of the root servers. Of course, the local name server must know how to locate the root name servers, so it uses the /var/named/named.ca file, which contains the names and IP addresses of the root name servers. The named.ca file is also known as the hints file.

In this example, the IP address for www.opensource.com is not stored by the root servers. The root server uses its database to locate the name and IP address of the authoritative name server for www.opensource.com.

The local name server queries the authoritative name server, which returns the IP address of www.opensource.com. The local name server then responds to the browser's request and provides it with the IP address. The authoritative name server for Opensource.com contains the zone files for that domain.
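As a quick check you can run yourself (this example is an addition, not part of the article's listings), dig's +trace option walks the delegation from the root servers rather than relying on whatever your local resolver has cached:

$ dig +trace www.opensource.com

The exact output will vary, but it lists the root name servers first, then the servers for the .com top-level domain, and finally the authoritative name servers that answer for the domain itself, mirroring the lookup steps described above.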



# dig www.opensource.com

; <<>> DiG 9.10.4-P6-RedHat-9.10.4-4.P6.fc25 <<>> www.opensource.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54308
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 3, ADDITIONAL: 4

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.opensource.com.            IN      A

;; ANSWER SECTION:
www.opensource.com.     300     IN      CNAME   opensource.com.
opensource.com.         300     IN      A       54.204.39.132

;; AUTHORITY SECTION:
opensource.com.         129903  IN      NS      ns1.redhat.com.
opensource.com.         129903  IN      NS      ns3.redhat.com.
opensource.com.         129903  IN      NS      ns2.redhat.com.

;; ADDITIONAL SECTION:
ns2.redhat.com.         125948  IN      A       209.132.183.2
ns3.redhat.com.         125948  IN      A       66.187.233.212
ns1.redhat.com.         125948  IN      A       209.132.186.218

;; Query time: 71 msec
;; SERVER: 192.168.0.51#53(192.168.0.51)
;; WHEN: Sat Mar 04 21:23:51 EST 2017
;; MSG SIZE rcvd: 186

Listing 2, above, shows the results of a dig command, displaying not only the IP address of the desired host, but also the authoritative servers, their IP addresses, and the server that actually fulfilled the request. This is the use of the dig command that obtains the DNS information for www.opensource.com. The dig command is a powerful tool that can tell us a lot of information about the DNS configuration for a host. It returns the actual records from the DNS database and displays the results in four main sections. Refer to Listing 2 as you read the descriptions of these sections.

The first section is the question section. For this example, it states that I am looking for the A record of "www.opensource.com." Notice the dot at the end of the top-level domain name. This indicates that .com is the final domain name component in the hostname.

The answer section shows two entries, a CNAME record and an A record. A records are the primary name resolver records, and there must be an A record, which contains the IP address for each host. CNAME stands for Canonical Name, and this record type is an alias for the A record and points to it. It is not typical practice to use "www" as the hostname for a web server. It is common to see a CNAME record that points to the A record of the FQDN; however, that is not quite the case here. Notice that the A record for Opensource.com does not have a hostname associated with it. It is possible to have a record that applies to the domain as a whole, as is the case here.

The authority section lists the authoritative name servers for the Opensource.com domain. In this case, those are the Red Hat name servers. Notice that the record type is NS for these entries.

The additional section lists the A records for the Red Hat name servers.

Following the additional section, I can find some additional interesting information, including the IP address of the server that returned the information shown in the results. In this case, it was my own internal name server.

DNS Client configuration
Most computers need little configuration to enable them to access name services. It usually consists of adding the IP addresses of one to three name servers to the /etc/resolv.conf file. This is typically performed at boot time on most home and laptop computers because they are configured using DHCP (Dynamic Host Configuration Protocol), which provides them with their IP address, gateway address, and the IP addresses of the name servers.

For hosts that are configured statically, the resolv.conf file is usually generated during installation from information entered by the sysadmin doing the install. In current Red Hat-based distributions and others that use NetworkManager to manage network configuration and to perform connection management, the static information, such as name servers, gateway, and IP address, is all stored in the interface configuration files located in /etc/sysconfig/network-scripts.

Overriding the defaults provided to a host by adding that information to the interface configuration file is possible. For example, I sometimes add my preferred name servers to the interface configuration files on my laptop and netbook. Many of the name servers provided by remote connections in public places, such as hotels, coffee shops, and even friends' personal Wi-Fi connections, can be unreliable and in some cases can use forwarders that intentionally censor results or redirect queries to pages of advertisements, so I always insert the Google public name servers in my interface configuration files. Refer to my article How to configure networking in Linux [6] for information about the interface configuration files.

Also, be aware that NetworkManager creates an interface configuration file for each Wi-Fi network it connects with. The configuration files are named for the SSID (Service Set



IDentifier) of the network. Be sure to add the desired name server entries to the correct file.

Some of the interface configuration files that have been created on my laptop by NetworkManager in the last few months are listed below.

• ifcfg-enp0s25 (This is the configuration file for the wired network.)
• ifcfg-FBI-DHS.TF1_EXT
• ifcfg-HOME-14A2
• ifcfg-linksys
• ifcfg-LinuxDude
• ifcfg-MomsPlace
• ifcfg-FBI-van
• ifcfg-PointSourceGuest
• ifcfg-Red_Hat_Guest
• ifcfg-Sands_3_hoa1
• ifcfg-Sheraton_Raleigh_Guest_Access
• ifcfg-SM-CLC1
• ifcfg-xfinityWi-Fi

And no, I do not have any connection with the FBI. Someone I know who shall remain nameless has an interesting sense of humor and enjoys making the neighbors nervous.

DNS record types
There are a number of different DNS record types, and I want to introduce some of the more common ones here. My next article will describe how to create your own name server using BIND and will use many of these record types to build your name server. These record types are used in the zone files that comprise the DNS database.

One common field in all of these records is "IN," which specifies that these are INternet records.

View a complete list of DNS record types on Wikipedia [7].

SOA
SOA is the Start of Authority record. It is the first record in any forward or reverse zone file, and it identifies this as the authoritative source for the domain it describes. It also specifies certain functional parameters. A typical SOA record looks like the sample below.

@   IN SOA   epc.example.com   root.epc.example.com. (
                 2017031301   ; serial
                 1D           ; refresh
                 1H           ; retry
                 1W           ; expire
                 3H )         ; minimum

The first line of the SOA record contains the name of the server for the zone and the zone administrator, in this case root.

The second line is a serial number. In this example, I use the date in YYYYMMDDXX format where XX is a counter. Thus, the serial number in the SOA record above represents the first version of this file on March 13, 2017. This format ensures that all changes to the serial number are incremented in a numerically sequential manner. Doing this is important because secondary name servers, also known as slave servers, only replicate from the primary server when the serial number of the zone file on the primary is greater than the serial number on the secondary. Be sure to increment the serial number when you make changes or the secondary server will not sync up with the modified data.

The rest of the SOA record consists of various times that secondary servers should perform a refresh from the primary and wait for retries if the first refresh fails. It also defines the amount of time before the zone's authoritative status expires.

Times all used to be specified in seconds, but recent versions of BIND allow other options defined with W=week, D=day, H=hour, and M=minute. Seconds are assumed if no other specifier is used.

$ORIGIN
The $ORIGIN record is like a variable assignment. The value of this variable is appended by the BIND program to any name in an A or PTR record that does not end in a period (.) in order to create the FQDN (Fully Qualified Domain Name) for that host. This makes for less typing because the zone administrator only has to type the host name portion and not the FQDN for each record.

$ORIGIN          example.com.

Also, the @ symbol is used as a shortcut for this variable and any occurrence of @ in the file is replaced by the value of $ORIGIN.

NS
The NS record specifies the authoritative name server for the zone. Note that both names in this record end with periods so that ".example.com" does not get appended to them. This record will usually point to the local host by its FQDN.

example.com.             IN       NS      epc.example.com.

Note that the host, epc.example.com, must also have an A record in the zone. The A record can point to the external IP address of the host or to the localhost address, 127.0.0.1.

A
The A record is the Address record type that specifies the relationship between the host name and the IP address assigned to that host. In the example below, the host test1 has IP address 192.168.25.21. Note that the value of $ORIGIN is



appended to the name test1 because test1 is not an FQDN and does not have a terminating period in this record.

test1                     IN    A        192.168.25.21

The A record is the most common type of DNS database record.

CNAME
The CNAME record is an alias for the name in the A record for a host. For example, the hostname server.example.com might serve as both the web server and the mail server. There would be one A record for the server, and possibly two CNAME records as shown below.

server                    IN    A        192.168.25.1
www                       IN    CNAME    server
mail                      IN    CNAME    server

Lookups with the dig command on www.example.com and mail.example.com will return the CNAME record for mail or www and the A record for server.example.com.

PTR
PTR records provide for reverse lookups. This is when you already know the IP address and need to know the fully qualified host name. For example, many mail servers do a reverse lookup on the alleged IP address of a sending mail server to verify that the name and IP address given in the email headers match. PTR records are used in reverse zone files. Reverse lookups can also be used when attempting to determine the source of suspect network packets.

Be aware that not all hosts have PTR records, and many ISPs create and manage the PTR records, so reverse lookups may not provide the needed information.

MX
The MX record defines the Mail eXchanger (i.e., the mail server) for the domain example.com. Notice that it points to the CNAME record for the server in the example above. Note that both example.com names in the MX record terminate with a dot so that example.com is not appended to the names.

; Mail server MX record
example.com.              IN   MX       10        mail.example.com.

Domains may have multiple mail servers defined. The number "10" in the above MX record is a priority value. Other servers may have the same priority or different ones. The lower numbers define higher priorities. Therefore, if all mail servers have the same priority, they would be used in round-robin fashion. If they have different priorities, mail delivery would first be attempted to the mail server with the highest priority—the lowest number—and if that mail server did not respond, delivery would be attempted to the mail server with the next highest priority.

Other records
There are other types of records that you may encounter in the DNS database. One type, the TXT record, is used to record comments about the zone or hosts in the DNS database. TXT records can also be used for DNS security. The rest of the DNS record types are outside the scope of this article.

Final thoughts
Name services are a very important part of making the internet easily accessible. They bind the myriad disparate hosts connected to the internet into a cohesive unit that makes it possible to communicate with the far reaches of the planet with ease. The DNS has a complex distributed database structure that is perhaps even unknowable in its totality, yet it can be rapidly searched by any connected device to locate the IP address of any other device that has an entry in that database.

Resources
• IANA (Internet Assigned Numbers Authority) [8]
• SOA (Start of Authority) record [9]
• List of DNS Record Types [10]
• Common DNS records and their uses [11]

Links
[1] https://www.isc.org/downloads/bind/
[2] http://www.opensource.com/
[3] https://en.wikipedia.org/wiki/Root_name_server
[4] https://en.wikipedia.org/wiki/Name_server#Authoritative_name_server
[5] https://en.wikipedia.org/wiki/Top-level_domain
[6] https://opensource.com/life/16/6/how-configure-networking-linux
[7] https://en.wikipedia.org/wiki/List_of_DNS_record_types
[8] https://www.iana.org/
[9] http://www.zytrax.com/books/dns/ch8/soa.html
[10] https://en.wikipedia.org/wiki/List_of_DNS_record_types
[11] https://blog.dnsimple.com/2015/04/common-dns-records/

Author
David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for OS/2 Magazine, Linux Magazine, Linux Journal, and Opensource.com. His article "Complete Kickstart," co-authored with a colleague at Cisco, was ranked 9th in the Linux Magazine Top Ten Best System Administration Articles list for 2008.







What is the TensorFlow machine intelligence platform?
BY AMY UNRUH



Learn about the Google-developed open source library for machine learning and deep neural
networks research.



TENSORFLOW [1] is an open source software library for numerical computation using data-flow graphs. It was originally developed by the Google Brain Team within Google's Machine Intelligence research organization for machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. It reached version 1.0 [2] in February 2017, and has continued rapid development, with 21,000+ commits thus far, many from outside contributors. This article introduces TensorFlow, its open source community and ecosystem, and highlights some interesting TensorFlow open sourced models.

TensorFlow is cross-platform. It runs on nearly everything: GPUs and CPUs—including mobile and embedded platforms—and even tensor processing units (TPUs [3]), which are specialized hardware to do tensor math on. They aren't widely available yet, but we have recently launched an alpha program [4].

Image by Google.com

The TensorFlow distributed execution engine abstracts away the many supported devices and provides a high-performance core implemented in C++ for the TensorFlow platform.

On top of that sit the Python and C++ frontends (with more to come). The Layers API [5] provides a simpler interface for commonly used layers in deep learning models. On top of that sit higher-level APIs, including Keras [6] (more on the Keras.io site [7]) and the Estimator API [8], which makes training and evaluating distributed models easier.

And finally, a number of commonly used models are ready to use out of the box, with more to come.

TensorFlow execution model

Graphs
Machine learning can get complex quickly, and deep learning models can become large. For many model graphs, you need distributed training to be able to iterate within a reasonable time frame. And, you'll typically want the models you develop to deploy to multiple platforms.

With the current version of TensorFlow, you write code to build a computation graph, then execute it. The graph is a


data structure that fully describes the computation you want to perform. This has lots of advantages:

• It's portable, as the graph can be executed immediately or saved to use later, and it can run on multiple platforms: CPUs, GPUs, TPUs, mobile, embedded. Also, it can be deployed to production without having to depend on any of the code that built the graph, only the runtime necessary to execute it.
• It's transformable and optimizable, as the graph can be transformed to produce a more optimal version for a given platform. Also, memory or compute optimizations can be performed and trade-offs made between them. This is useful, for example, in supporting faster mobile inference after training on larger machines.
• Support for distributed execution

TensorFlow's high-level APIs, in conjunction with computation graphs, enable a rich and flexible development environment and powerful production capabilities in the same framework.

Eager execution
An upcoming addition to TensorFlow is eager execution [9], an imperative style for writing TensorFlow. When you enable eager execution, you will be executing TensorFlow kernels immediately, rather than constructing graphs that will be executed later.

Why is this important? Four major reasons:

• You can inspect and debug intermediate values in your graph easily.
• You can use Python control flow within TensorFlow APIs—loops, conditionals, functions, closures, etc.
• Eager execution should make debugging more straightforward.
• Eager's "define-by-run" semantics will make building and training dynamic graphs easy.

Once you are satisfied with your TensorFlow code running eagerly, you can convert it to a graph automatically. This will make it easier to save, port, and distribute your graphs.

This interface is in its early (pre-alpha) stages. Follow along on GitHub [10].

TensorFlow and the open source software community
TensorFlow was open sourced in large part to allow the community to improve it with contributions. The TensorFlow team has set up processes [11] to manage pull requests, review and route issues filed, and answer Stack Overflow [12] and mailing list [13] questions.

So far, we've had more than 890 external contributors add to the code, with everything from small documentation fixes to large additions like OS X GPU support [14] or the OpenCL implementation [15]. (The broader TensorFlow GitHub organization has had nearly 1,000 unique non-Googler contributors.)

TensorFlow has more than 76,000 stars on GitHub, and the number of other repos that use it is growing every month—as of this writing, there are more than 20,000.

Many of these are community-created tutorials, models, translations, and projects. They can be a great source of examples if you're getting started on a machine learning task.

Stack Overflow is monitored by the TensorFlow team, and it's a good way to get questions answered [16] (with 8,000+ answered so far).

The external version of TensorFlow is no different from the version used internally, beyond some minor differences. These include the interface to Google's internal infrastructure (it would be no help to anyone), some paths, and parts that aren't ready yet. The core of TensorFlow, however, is identical. Pull requests to internal will appear externally within around a day and a half, and vice versa.

In the TensorFlow GitHub org [17], you can find not only TensorFlow [18] itself, but a useful ecosystem of other repos, including models [19], serving [20], TensorBoard [21], Project Magenta [22], and many more. (A few of these are described below.) You can also find TensorFlow APIs in multiple languages [23] (Python, C++, Java, and Go); and the community has developed other bindings [24], including C#, Haskell, Julia, Ruby, Rust, and Scala.

Performance and benchmarking
TensorFlow has high standards around measurement and transparency. The team has developed a set of detailed benchmarks [25] and has been very careful to include all necessary details to reproduce. We've not yet run comparative benchmarks, but would welcome others publishing comprehensive and reproducible benchmarks.

There's a section [26] of the TensorFlow site with information specifically for performance-minded developers. Optimization can often be model-specific, but there are some general guidelines that can often make a big difference.

TensorFlow's open source models
The TensorFlow team has open sourced a large number of models. You can find them in the tensorflow/models [27] repo. For many of these, the released code includes not only the model graph, but also trained model weights. This means that you can try such models out of the box, and you can tune many of them further using a process called transfer learning [28].

Here are just a few of the recently released models (there are many more):

• The Object Detection API [29]: It's still a core machine learning challenge to create accurate machine learning



models capable of localizing and identifying multiple objects in a single image. The recently open sourced TensorFlow Object Detection API [30] has produced state-of-the-art results (and placed first in the COCO detection challenge [31]).

The out-of-the-box Object Detection model, derived from raneko via Flickr, CC BY-2.0.

• tf-seq2seq [32]: Google previously announced Google Neural Machine Translation [33] (GNMT), a sequence-to-sequence (seq2seq) model that is now used in Google Translate production systems. tf-seq2seq [34] is an open source seq2seq framework in TensorFlow that makes it easy to experiment with seq2seq models and achieve state-of-the-art results.
• ParseySaurus [35] is a set of pretrained models that reflect an upgrade to SyntaxNet [36]. The new models use a character-based input representation and are much better at predicting the meaning of new words based both on their spelling and how they are used in context. They are much more accurate than their predecessors, particularly for languages where there can be dozens of forms for each word and many of these forms might never be observed during training, even in a very large corpus.

Asking ParseySaurus to parse a line from Jabberwocky

• Multistyle Pastiche Generator [37] from the Magenta Project [38]: "Style transfer" is what's happening under the hood with those fun apps that apply the style of a painting to one of your photos. This Magenta model extends image style transfer by creating a single network [39] that can perform more than one stylization of an image, optionally at the same time. (Try playing with the sliders for the dog images in this blog post [40].)

Style transfer example, derived from Anthony Quintano via Flickr, CC BY 2.0

Transfer learning
Many of the TensorFlow models [41] include trained weights and examples that show how you can use them for transfer learning [42], e.g. to learn your own classifications. You typically do this by deriving information about your input data from the penultimate layer of a trained model—which encodes useful abstractions—then use that as input to train your own much smaller neural net to predict your own classes. Because of the power of the learned abstractions, the additional training typically does not require large data sets.

For example, you can use transfer learning with the Inception [43] image classification model to train an image classifier that uses your specialized image data.

For examples of using transfer learning for medical diagnosis by training a neural net to detect specialized classes of images, see the following articles:

• Deep learning for detection of diabetic eye disease [44]
• Deep learning algorithm does as well as dermatologists in identifying skin cancer [45]
• Assisting pathologists in detecting cancer with deep learning [46]

And, you can do the same to learn your own [47] (potentially goofy [48]) image classifications too.

The Object Detection API [49] code is designed to support transfer learning as well. In the tensorflow/models [50] repo,



there is an example [51] of how you can use transfer learning to bootstrap this trained model to build a pet detector [52], using a (somewhat limited) data set of dog and cat breed examples. And, in case you like raccoons more than dogs and cats, see this tutorial [53] too.

The "pet detector" model, trained via transfer learning, derived from raneko via Flickr, CC BY-2.0

Using TensorFlow on mobile devices
Mobile is a great use case for TensorFlow—mobile makes sense when there is a poor or missing network connection or where sending continuous data to a server would be too expensive. But, once you've trained your model and you're ready to start using it [54], you don't want the on-device model footprint to be too big.

TensorFlow is working to help developers make lean mobile apps [55], both by continuing to reduce the code footprint and by supporting quantization [56]. (And although it's early days, see also Accelerated Linear Algebra [XLA [57]], a domain-specific compiler for linear algebra that optimizes TensorFlow computations.)

Image by Google.com

One of the TensorFlow projects, MobileNet [58], is developing a set of computer vision models that are particularly designed to [59] address the speed/accuracy trade-offs that need to be considered on mobile devices or in embedded applications. The MobileNet models can be found in the TensorFlow models repo [60] as well.

One of the newer Android demos, TF Detect [61], uses a MobileNet model trained using the TensorFlow Object Detection API.

And of course we'd be remiss in not mentioning "How HBO's 'Silicon Valley' built 'Not Hotdog' with mobile TensorFlow, Keras, and React Native [62]."

The TensorFlow ecosystem
The TensorFlow ecosystem includes many tools and libraries to help you work more effectively. Here are a few.

TensorBoard
TensorBoard [63] is a suite of web applications for inspecting, visualizing, and understanding your TensorFlow runs and graphs. You can use TensorBoard to view your TensorFlow model graphs and zoom in on the details of graph subsections.

You can plot metrics like loss and accuracy during a training run; show histogram visualizations of how a tensor is changing over time; show additional data, like images; collect runtime metadata for a run, such as total memory usage and tensor shapes for nodes; and more.

TensorBoard works by reading TensorFlow files that contain summary information [64] about the training process. You can generate these files when running TensorFlow jobs.

You can use TensorBoard to compare training runs, collect runtime stats, and generate histograms [65].

Image by Google.com
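As a rough sketch of how those summary files get written (this example is not from the article, assumes the TensorFlow 1.x Python API, and uses an arbitrary log directory and toy loss value):

import tensorflow as tf

# A trivial graph with one value we want to track.
x = tf.placeholder(tf.float32, name="x")
loss = tf.square(x, name="loss")

# Attach a scalar summary to the value and merge all summaries.
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    # The FileWriter produces the event files that TensorBoard reads.
    writer = tf.summary.FileWriter("/tmp/tb_demo", sess.graph)
    for step in range(100):
        summary, _ = sess.run([merged, loss], feed_dict={x: 100.0 - step})
        writer.add_summary(summary, step)
    writer.close()

Pointing TensorBoard at the same directory (tensorboard --logdir /tmp/tb_demo) then shows the loss curve and the model graph.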




Image by Google.com

A particularly mesmerizing feature of TensorBoard is its embeddings visualizer [66]. Embeddings [67] are ubiquitous [68] in machine learning, and in the context of TensorFlow, it's often natural to view tensors as points in space, so almost any TensorFlow model will give rise to various embeddings.

TensorBoard Embedding Video [69]

Datalab
Jupyter [70] notebooks are an easy way to interactively explore your data, define TensorFlow models, and kick off training runs. If you're using Google Cloud Platform tools and products as part of your workflow—maybe using Google Cloud Storage [71] or BigQuery [72] for your datasets, or Apache Beam [73] for data preprocessing [74]—then Google Cloud Datalab [75] provides a Jupyter-based environment with all of these tools (and others like NumPy, pandas, scikit-learn, and Matplotlib), along with TensorFlow, preinstalled and bundled together. Datalab is open source [76], so if you want to further modify its notebook environment, it's easy to do.

Facets
Machine learning's power comes from its ability to learn patterns from large amounts of data, so understanding your data can be critical to building a powerful machine learning system.

Facets [77] is an open source data visualization tool [78] that helps you understand your machine learning datasets and get a sense of the shape and characteristics of each feature and see at a glance how the features interact with each other. For example, you can view your training and test datasets (as is done here with some Census [79] data), compare the characteristics of each feature, and sort the features by "distribution distance."

Inspecting Census data with Facets. Image by Google.com

Cloud Datalab includes Facets integration. This GitHub link [80] has a small example of loading a NHTSA Traffic Fatality [81] BigQuery [82] public dataset [83] and viewing it with Facets.

Image by Google.com

In Facets' Dive view we can quickly see which states have the most traffic fatalities and that the distribution of collision type appears to change as the number of fatalities per accident increases.

And more …
Another useful diagnostic tool is the TensorFlow debugger [84], tfdbg, which lets you view the internal structure and states of running TensorFlow graphs during training and inference.

Once you've trained a model that you're happy with, the next step is to figure out how you'll serve it in order to scalably support predictions on the model. TensorFlow Serving [85] is a high-performance serving system for machine-learned models, designed for production environments. It has recently [86] moved to version 1.0.
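As a minimal sketch of that handoff (again, not from the article; it assumes a TensorFlow 1.x release that includes tf.saved_model.simple_save, and the export path and tensor names are made up for illustration), a trained session can be exported in the SavedModel format that TensorFlow Serving loads:

import tensorflow as tf

# Stand-in for a trained model: y = W*x + b.
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
W = tf.Variable([[2.0]], name="W")
b = tf.Variable([0.5], name="b")
y = tf.add(tf.matmul(x, W), b, name="y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes a SavedModel (graph plus variables) under version "1" of
    # the export directory; a serving process can watch that directory.
    tf.saved_model.simple_save(
        sess, "/tmp/demo_model/1",
        inputs={"x": x}, outputs={"y": y})

A model server would then be started against the base directory (for example, tensorflow_model_server --model_name=demo --model_base_path=/tmp/demo_model), and new numbered subdirectories are picked up as new model versions.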



And more …
Another useful diagnostic tool is the TensorFlow debugger [84], tfdbg, which lets you view the internal structure and states of running TensorFlow graphs during training and inference.
   Once you’ve trained a model that you’re happy with, the next step is to figure out how you’ll serve it in order to scalably support predictions on the model. TensorFlow Serving [85] is a high-performance serving system for machine-learned models, designed for production environments. It has recently [86] moved to version 1.0.
   There are many other tools and libraries that we don’t have room to cover here, but see the TensorFlow GitHub org [87] repos to learn about them.
   The TensorFlow site [88] has many getting started [89] guides, examples, and tutorials [90]. (A fun new tutorial is this [91] audio recognition example.)

Links
[1] https://www.tensorflow.org/
[2] https://research.googleblog.com/2017/02/announcing-tensorflow-10.html
[3] https://www.blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/
[4] https://www.tensorflow.org/tfrc/
[5] https://www.tensorflow.org/tutorials/layers/
[6] https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/keras
[7] https://keras.io/
[8] https://www.tensorflow.org/get_started/estimator
[9] https://developers.googleblog.com/2017/10/eager-execution-imperative-define-by.html
[10] https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager
[11] https://www.oreilly.com/ideas/how-the-tensorflow-team-handles-open-source-support
[12] https://stackoverflow.com/questions/tagged/tensorflow
[13] https://groups.google.com/a/tensorflow.org/forum/#!forum/discuss
[14] https://github.com/tensorflow/tensorflow/pull/664
[15] https://github.com/tensorflow/tensorflow/pull/9117
[16] https://stackoverflow.com/questions/tagged/tensorflow
[17] https://github.com/tensorflow
[18] https://github.com/tensorflow/tensorflow
[19] https://github.com/tensorflow/models
[20] https://github.com/tensorflow/serving
[21] https://github.com/tensorflow/tensorboard
[22] https://github.com/tensorflow/magenta
[23] https://www.tensorflow.org/api_docs/
[24] https://www.tensorflow.org/api_docs/
[25] https://www.tensorflow.org/performance/benchmarks
[26] https://www.tensorflow.org/performance/performance_models
[27] https://github.com/tensorflow/models
[28] https://www.tensorflow.org/tutorials/image_retraining
[29] http://research.googleblog.com/2017/06/supercharge-your-computer-vision-models.html
[30] https://github.com/tensorflow/models/tree/master/research/object_detection
[31] http://mscoco.org/dataset/#detections-leaderboard
[32] https://research.googleblog.com/2017/04/introducing-tf-seq2seq-open-source.html
[33] https://research.googleblog.com/2016/09/a-neural-network-for-machine.html
[34] https://google.github.io/seq2seq/
[35] https://research.googleblog.com/2017/03/an-upgrade-to-syntaxnet-new-models-and.html
[36] https://research.googleblog.com/2016/05/announcing-syntaxnet-worlds-most.html
[37] https://magenta.tensorflow.org/2016/11/01/multistyle-pastiche-generator/
[38] https://magenta.tensorflow.org/
[39] https://github.com/tensorflow/magenta/tree/master/magenta/models/image_stylization
[40] https://magenta.tensorflow.org/2016/11/01/multistyle-pastiche-generator/
[41] https://github.com/tensorflow/models
[42] https://en.wikipedia.org/wiki/Transfer_learning
[43] https://www.tensorflow.org/tutorials/image_retraining
[44] https://research.googleblog.com/2016/11/deep-learning-for-detection-of-diabetic.html
[45] http://news.stanford.edu/2017/01/25/artificial-intelligence-used-identify-skin-cancer
[46] https://research.googleblog.com/2017/03/assisting-pathologists-in-detecting.html
[47] https://www.tensorflow.org/tutorials/image_retraining
[48] http://amygdala.github.io/ml/2017/02/03/transfer_learning.html
[49] http://research.googleblog.com/2017/06/supercharge-your-computer-vision-models.html
[50] https://github.com/tensorflow/models
[51] https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_pets.md
[52] https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine
[53] https://medium.com/towards-data-science/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9
[54] https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
[55] https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android/
[56] https://www.tensorflow.org/performance/quantization
[57] https://www.tensorflow.org/performance/xla/
[58] https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html
[59] https://arxiv.org/abs/1611.10012
[60] https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
[61] https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java
[62] https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3
[63] https://github.com/tensorflow/tensorboard/blob/master/README.md
[64] https://www.tensorflow.org/get_started/summaries_and_tensorboard



[65] https://www.tensorflow.org/get_started/tensorboard_histograms
[66] https://www.tensorflow.org/get_started/embedding_viz
[67] http://colah.github.io/posts/2014-10-Visualizing-MNIST/
[68] https://www.tensorflow.org/tutorials/word2vec
[69] https://www.tensorflow.org/images/embedding-mnist.mp4
[70] https://jupyter.org/
[71] https://cloud.google.com/storage/
[72] https://cloud.google.com/bigquery/
[73] https://beam.apache.org/
[74] https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/flowers/pipeline.py#L201
[75] https://cloud.google.com/datalab/
[76] https://github.com/googledatalab/datalab
[77] https://research.googleblog.com/2017/07/facets-open-source-visualization-tool.html
[78] https://pair-code.github.io/facets/
[79] http://archive.ics.uci.edu/ml/datasets/Census+Income
[80] https://github.com/amygdala/code-snippets/blob/master/datalab/facets/facets_snippets.ipynb
[81] https://cloud.google.com/bigquery/public-data/nhtsa
[82] https://cloud.google.com/bigquery/
[83] https://cloud.google.com/bigquery/public-data/
[84] https://www.tensorflow.org/programmers_guide/debugger
[85] https://www.tensorflow.org/serving
[86] https://developers.googleblog.com/2017/07/tensorflow-serving-10.html
[87] https://github.com/tensorflow
[88] https://www.tensorflow.org
[89] https://www.tensorflow.org/get_started/
[90] https://www.tensorflow.org/tutorials/
[91] https://www.tensorflow.org/versions/master/tutorials/audio_recognition

Author
Amy is a developer relations engineer for the Google Cloud Platform, where she focuses on machine learning and data analytics as well as other Cloud Platform technologies. Amy has an academic background in CS/AI and has also worked at several startups, done industrial R&D, and published a book on App Engine.












  Is blockchain
  a security topic?
                                                        BY MIKE BURSELL


  Yet again, we need to understand how systems and the business
  work together and be honest about the fit.




BLOCKCHAINS are big news at the moment. There are conferences, start-ups, exhibitions, open source projects (in fact, pretty much all of the blockchain stuff going on out there is open source—look at Ethereum, Zcash, and Bitcoin as examples); all we need now are hipster-run blockchain-themed cafés.1 If you’re looking for an initial overview, you could do worse than the Wikipedia entry [1]—but that’s not the aim of this post.
   Before we go much further, one useful thing to know about many blockchain projects is that they aren’t. Blockchains, that is. They are, more accurately, distributed ledgers.4 For now, however, let’s roll in blockchain and distributed ledger technologies and assume we’re talking about the same thing: it’ll make it easier for now, and in most cases, the difference is immaterial for our discussion.
   I’m not planning to go into the basics here, but we should briefly talk about the main link with crypto and blockchains, and that’s the blocks themselves. To build a block—a set of transactions to put into a blockchain—and then link it into the blockchain, cryptographic hashes are used. This is the most obvious relationship that the various blockchains have with cryptography.
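To make that hash-linking concrete, here is a toy sketch (my own illustration, not taken from the article; real blockchains layer Merkle trees, signatures, and consensus rules on top of this idea):

# Toy illustration of linking blocks with cryptographic hashes.
import hashlib
import json

def block_hash(prev_hash, transactions):
    """Hash the previous block's hash together with this block's transactions."""
    payload = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

genesis = block_hash("0" * 64, [{"from": "alice", "to": "bob", "amount": 5}])
second = block_hash(genesis, [{"from": "bob", "to": "carol", "amount": 2}])

# Tampering with any earlier transaction changes that block's hash, which
# breaks every later block that committed to it; that is the chain property.
print(genesis)
print(second)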



There’s another, equally important one, however, which is about identity.5 Now, for many blockchain-based crypto-currencies, a major part of the point of using them at all is that identity isn’t, at one level, important. There are many actors in a cryptocurrency who may be passing each other vanishingly small or eye-wateringly big amounts of money, and they don’t need to know each other’s identity in order to make transactions.
   To be clearer, the uniqueness of each actor absolutely is important—I want to be sure that I’m sending money to the entity who has just rendered me a service—but being able to tie that unique identity to a particular person IRL6 is not required. To use the technical term, such a system is pseudonymous. Now, if pseudonymity is a key part of the system, then protecting that property is likely to be important to its users. Cryptocurrencies do this with various degrees of success. The lesson here is that you should do some serious reading and research if you’re planning to use a cryptocurrency and this property matters to you.
   On the other hand, there are many blockchain/distributed ledger technologies where pseudonymity is not a required property and may actually be unwanted. These are the types of systems in which I am most generally interested from a professional point of view.
   In particular, I’m interested in permissioned blockchains. Permissionless (or non-permissioned) blockchains are those where you don’t need permission from anyone in order to participate. You can see why pseudonymity and permissionless blockchains can fit well today: most (all?) cryptocurrencies are permissionless. Permissioned blockchains are a different kettle of fish, however, and they’re the ones many businesses are looking at now. In these cases, you know the people or entities who are going to be participating—or, if you don’t know now, you’ll want to check on them and their identity before they join your blockchain (or distributed ledger). And here’s why blockchains are interesting in business.7 It’s not just that identity is interesting, although it is, because how you marry a particular entity to an identity and make sure that this binding is not spoofable over the lifetime of the system is difficult, difficult, lemon difficult8—but there’s more to it than that.
   What’s really interesting is that, if you’re thinking about moving to a permissioned blockchain or distributed ledger with permissioned actors, then you’re going to have to spend some time thinking about trust [2]. You’re unlikely to be using a proof-of-work system for making blocks—there’s little point in a permissioned system—so who decides what comprises a “valid” block that the rest of the system should agree on? Well, you can rotate around some (or all) of the entities, or you can have a random choice, or you can elect a small number of trusted entities. Combinations of these schemes may also work.
   If these entities all exist within one trust domain, which you control, then fine, but what if they’re distributors, or customers, or partners, or other banks, or manufacturers, or semi-autonomous drones, or vehicles in a commercial fleet? You really need to ensure that the trust relationships that you’re encoding into your implementation/deployment truly reflect the legal and IRL trust relationships that you have with the entities that are being represented in your system.
   And the problem is that, once you’ve deployed that system, it’s likely to be very difficult to backtrack, adjust, or reset the trust relationships that you’ve designed. And if you don’t think about the questions I noted above, about long-term bindings of identity, you’re going to be in for some serious problems when, for instance:

• an entity is spoofed
• an entity goes bankrupt
• an entity is acquired by another entity (buyouts, acquisitions, mergers, etc.)
• an entity moves into a different jurisdiction
• legislation or regulation changes.

These are all issues that are well catered for within existing legal frameworks (with the possible exception of the first), but that are more difficult to manage within the sorts of systems we are generally concerned with in this article.
   Please don’t confuse the issues noted above with the questions around how to map legal agreements to the so-called “smart contracts” in blockchain/distributed ledger systems. That’s another thorny (and, to be honest, not unconnected) issue, but this one goes right to the heart of what a system is, and it’s the reason that people need to think very hard about what they’re really trying to achieve when they adopt the latest buzzword technology. Yet again, we need to understand how systems and the business work together and be honest about the fit.

1 If you come across one of these, please let me know. Put a picture in a comment or something.2
2 Even better—start one yourself. Make sure I get an invitation to the opening.3
3 And free everything.
4 There have been online spats about this. I’m not joining in.
5 There are others, but I’ll save those for another day.
6 IRL = “in real life.” I’m so old-skool.
7 For me. If you’ve got this far into the article, I’m hoping there’s an even chance that the same will go for you, too.
8 I’ll leave this as an exercise for the reader. Watch it, though, and the TV series [3] on which it’s based. Unless you don’t like swearing, in which case don’t watch either.

This article originally appeared on Alice, Eve, and Bob—a security blog [4] and is republished with permission.

Links
[1] https://en.wikipedia.org/wiki/Blockchain
[2] https://aliceevebob.wordpress.com/2017/05/09/what-is-trust-with-apologies-to-pontius-pilate/
[3] https://en.wikipedia.org/wiki/In_the_Loop
[4] https://aliceevebob.com/2017/06/13/is-blockchain-a-security-topic/
[5] https://opensource.com/article/17/11/politics-linux-desktop

Author
I’ve been in and around Open Source since around 1997, and have been running (GNU) Linux as my main desktop at home and work since then: not always easy… I’m a security bod and architect, and am currently employed as Chief Security Architect for Red Hat. I have a blog—“Alice, Eve & Bob” [5]—where I write (sometimes rather parenthetically) about security. I live in the UK and like single malts.




CREATING

  Top open source solutions
  for designers and artists
  from 2017                                                                                       BY ALAN SMITHEE


  We collected popular 2017 Opensource.com articles about exciting
  developments in open source solutions for designers and artists.


SOMETIMES it seems no one will take you seriously in the art world should you dare deviate from the prescribed toolset of a “real artist,” but they used to say the same thing about Linux in the server room, on phones, and on laptops, and here we are today running the internet, Android, and Chromebooks on Linux.
   More and more, Linux and open source are popping up as legitimate options in art and design. That said, the art world, ironically seen as a disruptively progressive community, still has a long way to go before open source is its default, but headway is being made. Opensource.com published a variety of articles in 2017 that highlight how truly capable, flexible, and exciting the open source software user’s design toolset really is.

Web design
In 2016, Jason Baker looked at four alternatives to Adobe’s Dreamweaver, and this year he expanded that review to 7 open source alternatives to Dreamweaver [1]. The title is humble, because he mentions far more than seven alternatives, but the real star of the article is BlueGriffon [2]. BlueGriffon delivers exactly what the article promises: a true WYSIWYG, HTML5-compliant alternative to Dreamweaver. When people think “open source Dreamweaver,” BlueGriffon is exactly what they have in mind.

Technical design
One of the more technical areas within art and design is CAD, the backbone of architecture, civil engineering, and industrial design. Jason Baker’s article 3 open source alternatives to AutoCAD [3] takes a look at (more than) three CAD applications. This was conveniently followed by an in-depth tutorial on drawing primitive shapes with BRL-CAD [4] by Isaac Kamga, in which the geometry and code behind a heart-shaped primitive are explained in great detail.
   Walk into 10 modern art galleries, and you’re likely to see LEDs or a micro-controller in at least four. The Arduino and Raspberry Pi have opened new and exciting avenues for interactive or generative art installations regardless of budget. One such example is outlined by Zack Akil in his article on using an Arduino and Raspberry Pi to turn a fiber optic neural network into wall art [5]. The article’s title follows the Opensource.com tradition of humility, as Zack leverages 3D printing, a micro-controller, a tiny server, and machine learning to create a glowing, plasma-like, generative art display.

Code
Jason van Gumster continued his series on Python tricks for artists with a lesson on How to add interactivity to any Python script [6]. Traditionally, an artist might have abused their medium as a way to show how progressive their art is, and the more that modern artists embrace technology, the more we realize that a significant portion of modern art is bending the tools themselves. That’s what Jason’s series has demonstrated, and hopefully tech-savvy artists have taken note of how easy, and yet powerful, Python is as an artistic tool.
   To that point, Jeff Macharyas’s article on 2 open source Adobe InDesign scripts [7] (which actually covers three great open source tools), demonstrates how he benefits from open source even when working within a proprietary toolchain. Macharyas shows how he “fixed” major flaws in the proprietary software’s workflow with open source scripts. It’s almost as if open source is the default of human nature, and proprietary software is out of step.

Print and graphic design
When most people think of design, they first imagine graphic design and page layout. That’s the side of design that we see on an everyday basis; we see it at the supermarket, at bus stops, on billboards, and on the magazine rack. Since the products of this labor are often, by degrees, disposable, this is an active area of the arts.
   GIMP is a mainstay of open source graphics work. Seth Kenlon shares 7 must-have GIMP brushes [8], and Carl Hermansen describes how GIMP has literally changed his life [9]. Well, suffice it to say that he’s not the only one.



   Greg Pittman’s articles on ImageMagick provide some great image viewing tricks [10], plus a getting started tutorial [11]. ImageMagick, for its scriptability, is one of the most powerful graphics tools available, so getting familiar with it is worth an investment in time and effort.
   Seth Kenlon’s article on 8 ways to turn cheating into an art form [12] takes a broad approach to improving your open source animations with common tricks of the trade that are visible in all the old Saturday morning classics. Even if you don’t animate, the article’s worth a read for nostalgia alone. In another article, he broadens the scope of graphic design by exploring tabletop game design [13] using open source tools like Synfig, Inkscape, Scribus, and much more.
   In a more technical article, author RGB-es explains everything you ever wanted to know about the OpenType font system [14]. Even if this has never been on your list of things to learn, give it a read because it’s great background knowledge.

Project management
Artists don’t just deal with technology, they also have to deal with practical concerns like time and space. Few artists love being project managers, and that’s why a good set of open source tools is so useful. Jason van Gumster’s article on mind mapping [15] looks at all the different aspects of getting your artistic ideas organized. It covers several tools and several ideas about the subject, and it’s useful whether you’re an artist or not.
   Seth Kenlon covers Planter [16], a system used to organize the assets of an art project. It may be something you don’t think much about if you do one or two projects a year, but it’s a serious concern if you’re working on artistic projects every week. A tool like Planter lets you use and reuse assets across several projects without constant duplication. If nothing else, the article might make you reconsider the way you organize your data, and what better to do over the weekend than reorganize your digital closet?

Musing and analysis
Art and technology can sometimes have a strained relationship. Artists may or may not care about the tech they use, and if they care too much about it, they risk losing their “artist” label for something more tech-centric, like “geek.” Likewise, technologists who care about art may risk it being seen as computer exercises or excuses for idle programming. It’s a constant struggle.
   Julia Velkova analyzes this, and much, much more, in her article on rewriting the history of free software and computer graphics [17]. She takes a look at how and why professional computer graphics evolved over the years, and where the popularity of CGI has left independent producers.
   In an attempt to make open source software less intimidating, the design firm Ura Design has donated work [18] to several open source projects. Justin W. Flory uses Ura’s story as a great example of how artists interested in open source can connect the technical teams behind the code to a non-technical audience.
   Adam Hyde explores this theme in his article about the so-called itch-to-scratch [19] development model. The idea is that users who have a problem are most likely to get involved in fixing that problem, so those users must be invited into the development process. It’s a great read that exposes several potential blind spots in the typical open source development process.
   Last, but not least, is the success story of how a popular web comic was adapted into an animation [20], thanks to an open license. It’s all well and good to praise open source and open culture, but it’s particularly nice to highlight a project that actually takes advantage of it. There are also several tips about open source tools like the COA-Tools [21] Blender add-on in the interview.
   Clearly, open source art and design has been an exciting and fruitful topic in 2017, and I’ll venture to say that 2018 will be even better!

Links
[1] https://opensource.com/alternatives/dreamweaver
[2] http://www.bluegriffon.org
[3] https://opensource.com/alternatives/autocad
[4] https://opensource.com/article/17/4/primitive-shapes-BRL-CAD
[5] https://opensource.com/article/17/10/fiber-optic-neural-network-art
[6] https://opensource.com/article/17/3/python-tricks-artists-interactivity-Python-scripts
[7] https://opensource.com/article/17/3/scripts-adobe-indesign
[8] http://opensource.com/article/17/10/7-must-have-gimp-brushes
[9] https://opensource.com/article/17/6/gimp-10-ways
[10] http://opensource.com/article/17/9/imagemagick-viewing-images
[11] http://opensource.com/article/17/8/imagemagick
[12] http://opensource.com/article/17/5/animation-magician-how-turn-cheating-art-form
[13] http://opensource.com/article/17/10/designing-tabletop-games-open-source
[14] https://opensource.com/article/17/11/opentype
[15] http://opensource.com/article/17/8/mind-maps-creative-dashboard
[16] http://opensource.com/article/17/7/managing-creative-assets-planter
[17] https://opensource.com//article/17/2/history-free-software-computer-graphics-development
[18] https://opensource.com/article/17/6/ura-design-open-source-projects
[19] https://opensource.com//article/17/4/itch-to-scratch-model-user-problems
[20] http://opensource.com/article/17/6/web-comic-open-license
[21] https://github.com/ndee85/coa_tools

Author
I like my privacy.






 How to use Pulse to
 manage sound on Linux
                                                                                                BY SETH KENLON

  Learn how audio on Linux works and why you should consider Pulse to manage it.




IT HAPPENS to everyone, and usually only when it matters the most. You might be gearing up for a family video chat, settling in for a movie night on your big-screen TV, or getting ready to record a tune that popped into your head and needs freeing. At some point, if you use a computer, sound is going to need to be routed.

How Linux audio works
Without going into technical detail, here’s a map of how Linux audio works.
   First of all, there’s a source and there’s a target: something is making sound and something else is supposed to receive and process that sound.
   For most everyday tasks, doing all this translates to using an application (like VLC Media Player [1], for instance) generating sound and a device (like your speakers or headphones) receiving that sound and delivering it to your ears.
   The other way round is basically the same; a device (like a microphone) generates sound and sends it to an application (like Jitsi video chat [2] or the Qtractor [3] DAW) for processing.
   No matter what, the model is always the same. Sound is generated by one thing and sent to another.
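Purely to illustrate that model (my own sketch, not something the article uses; it relies on the third-party python-sounddevice and numpy packages), an application can generate samples and hand them to the sound stack, which delivers them to whichever output device is configured:

# Illustrative only: an "application" generating sound and handing it to
# the system's audio stack, which routes it to the configured output device.
import numpy as np
import sounddevice as sd

samplerate = 44100
t = np.linspace(0, 1.0, samplerate, endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone

sd.play(tone, samplerate)  # hand the samples to the sound system
sd.wait()                  # block until playback finishes
print(sd.query_devices())  # list the devices the sound stack exposes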



Between those two end points exists a Linux sound system, because, after all, something needs to route the sound.
   Without going too far back in history, Advanced Linux Sound Architecture (ALSA) [4] traditionally managed Linux audio. In fact, ALSA still manages Linux audio. The difference is that on modern Linux, users don’t generally need to deal directly with ALSA to route sound. Instead, they can use tools sitting on top of ALSA, like Pulse Audio [5].
   If you have sound working on your Linux machine on a daily basis, but you get thrown off balance when you need to get specific about sound inputs and outputs, read on. This is not an article about how to install drivers or set sound defaults. If you want to know more about that level of sound configuration, visit support forums such as Linux Questions [6] and documentation sites such as Slackermedia [7] to help you. This article is about getting comfortable with the sound controls of a modern Linux system.

Why Pulse?
Why is Pulse necessary?
   Strictly speaking, it isn’t. ALSA works so well that some distributions are just starting to integrate Pulse by default. However, dealing directly with ALSA can require a lot of manual hacking. I’m not talking about the initial setup. Using ALSA could result in some pretty convoluted configs and wrapper-scripts, and you still never get one configuration to serve your every use case. The problem wasn’t always with ALSA. Sometimes it was the fault of the application [8] itself, but that doesn’t change the end result. Your box was still “broken” until you could swap out config files and restart the service.
   The thing is, we’re demanding a lot more of our computers now than ever before. Audio output used to be either a speaker or headphones, but now we want our computer to beam audio across the room to the screen we use as a TV and to pick up audio from a Bluetooth mic in a phone.
   Pulse sits patiently between the thing that is generating sound and the thing meant to receive that sound, making sure everything plays nicely with one another. It adds several bonus features, too, such as the ability to send audio to a different computer and invisibly changing the sample format or channel count.

Learning Pulse
To get comfortable with Pulse, you need to remember three things:

1. Check your cables (virtual and physical)
2. Set sound input or output from the source of the sound
3. Manage your targets from Pulse Audio Control (pavucontrol)

Step 1: Check cables and hardware
Check your cables. Check volume knobs. Check mute buttons and power buttons. You’re living in the “turn it off, and then on again” school of audio engineering.
   Admit it. You’ve done this once or twice yourself, too. If you left your headphones plugged in, or you forgot to power on your speakers, or turned the volume down on your speaker or the application playing sound, then spending time and effort configuring your system is pointless. Do the “dummy check” first.

Step 2: Check application preferences
Similar to checking cables and knobs, check the settings of the sound application you’re using on your computer. Not all applications give you much of a choice, but there’s usually some kind of menu somewhere governing what the application does with its sound. VLC, for example, gives you lots of choices, while an application like Google Hangouts gives you a simplified view.
   The point is, you need to decide where your sound is headed once it leaves its parent application. Make sure it’s set sanely.
   If you’re confused by all the choices, it’s usually safe to send sound to Pulse.

• Send sound to Pulse to benefit from Pulse’s simplified worldview. You can send it to Pulse and manage it from Pulse’s control panel—Pulse’s job is to manage sound dynamically.
• Send sound to ALSA if you want direct control. This may be important if you’re using pro apps, like a soft synth and an effects rack and a DAW, and you need absolute control over channel routing (with JACK or Patchage, for instance) and processing order.
• Pulse has an ALSA plug-in, so even if your first choice as a destination is ALSA, you’ll still have some ability to



manage that sound from Pulse. Pulse doesn’t “steal” your audio, so you don’t have to worry about Pulse intercepting your signal and re-routing it someplace else. Pulse always respects the choices made at lower levels (and ALSA is about as low as you can get in the sound system, drivers notwithstanding).

Step 3: Pulse audio volume control (pavucontrol)
The nerve center of Pulse Audio is pavucontrol, more commonly known as “the sound control panel,” because its default home is in Gnome’s System Settings. (It’s also available as pavucontrol-qt for KDE System Settings.) It can be installed and invoked as a standalone application, too, so remember its official title.
   You use pavucontrol on a daily basis to set sound levels and routing on your computer. It’s listed as step 3 in my list of things to do, but realistically it’s your first stop for normal, everyday sound management (in fact, when you adjust the volume on the Gnome desktop, you’re tapping into these same controls, so you use it daily whether you realize it or not).
   pavucontrol is a dynamic panel consisting of five tabs:

• Configuration: activates sound cards and defines the usage profile. On my desktop machine, for instance, I generally have HDMI de-activated and my built-in analog card on and set to Stereo Duplex. You won’t often use this panel; it’s mostly something you set once and forget about.
• Input Devices: currently available input devices (anything capable of making sound). These usually consist of a microphone (very common on laptops, which usually have a built-in mic for the webcam), a line-in, and a “monitor” device for whatever is currently playing on your system (more on that later).
• Output Devices: currently available output targets, such as desktop speakers and headphones (plugged into Line Out ports), and USB headsets.
• Recording: currently active recording sessions. This might be a web browser looking for sound input for a video chat session, or it might be a recording application like Audacity. If it’s got a socket open for sound, it’s here.
• Playback: currently active sound streams being played. If it’s meant to be heard, then it’s here.

The important thing to remember about pavucontrol is that it is dynamic. If Audacity isn’t recording, then it won’t show up in the Recording tab. If XMMS isn’t playing, then it won’t show up in the Playback tab. If your USB headset isn’t plugged in, then it won’t show up in the Input or Output tabs.
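If you ever want to double-check the same objects from a terminal or a script, the stock pactl utility can list them; the short Python wrapper below is my own illustration (not part of the article’s workflow), and the “.monitor” sources it prints are the same monitor devices pavucontrol shows in its Input Devices tab:

# Illustrative helper around the pactl command-line tool: list Pulse sinks
# (playback targets) and sources (capture devices, including monitors).
import subprocess

def pactl_short(kind):
    out = subprocess.run(["pactl", "list", "short", kind],
                         capture_output=True, text=True, check=True).stdout
    return [line.split("\t") for line in out.splitlines() if line.strip()]

print("Sinks (playback targets):")
for row in pactl_short("sinks"):
    print("  " + row[1])

print("Sources (capture devices, monitors included):")
for row in pactl_short("sources"):
    print("  " + row[1])

# The scripted equivalent of picking a new output in pavucontrol is roughly:
#   pactl set-default-sink <sink-name>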



Routing sound with pavucontrol
Routing sound in pavucontrol is done entirely through drop-down menus. First, try something simple by launching your favorite music player and playing some music. Then open pavucontrol (remember, it may be located in the GNOME or KDE System Setting > Sound panel on your distro) and click the Configuration tab.
   In the Configuration tab, take note of what device is the active one, and what profile it is using. Mine is Built-in Audio set to Analog Stereo Duplex, but yours may be different.
   Once you’ve got that jotted down somewhere, change it to Off, and sure enough, the music stops. Well, it doesn’t actually stop, it’s just not being heard by you because you “un-set” your default active output. Change the setting from Off to whatever it was before, and your music returns.
   As you can see, the Configuration tab sets the primary output for your system. For that reason, it’s the first panel you should check after installing a new graphics card; HDMI is infamous for trying to steal away priority from onboard sound cards. Otherwise, once it’s set, it stays basically unchanged until you install something new, or have the desire to add or change output devices.
   Now for something more complex: let’s hijack the sound playing on your own computer and record it to a file.
   Launch Audacity [9] and set its input source to Pulse (Audacity > Edit > Preferences).
   Press the Record button or go to the Transport menu > Record.
   At first, you should notice that you’re recording silence. Switch over to pavucontrol and navigate to the Recording tab.
   In the Recording tab, click the drop-down menu on the right and change the sound source from Built-In Stereo (or whatever yours is set to, depending on your system defaults) to Monitor of. This sets the source of the sound from the physical device (in my case, the desktop speakers that I listen to music from) to a software monitor of that device. Check Audacity again and you’ll find that you’re intercepting and recording your own system.

Web input and other sound problems
The same process holds true for video chatting with friends. If Pulse doesn’t know to send the input from your USB headset or your webcam mic to your web browser or video chat application, then unless it just happens to be the default anyway, the sound isn’t going to reach your video chat application.
   The same is true for playing audio. If you’re playing a movie and not hearing the sound, check Pulse! It could be that you’re sending sound to a nonactive sound device or to something that’s been muted.

Linux plays sound!
There are always going to be sound issues with computers. Sound devices need drivers, operating systems need to detect them and manage them, and users need to understand how the controls work. The key to seamless sound on your computer is to set up the sound devices when you first install your OS, confirm it’s working, and then learn the tools the OS provides for you to control the sound settings.
   Yes, it’s 2017 and Linux can play sound, but it can do more than that: it can manage sound. You can, too, as long as you learn the tools and, as always, don’t panic.

Links
[1] http://www.videolan.org/vlc/index.html
[2] https://jitsi.org/
[3] https://qtractor.sourceforge.io/
[4] http://www.alsa-project.org/main/index.php/Main_Page
[5] https://www.freedesktop.org/wiki/Software/PulseAudio
[6] http://linuxquestions.org/
[7] http://slackermedia.info/handbook/doku.php?id=linuxaudio
[8] https://bugzilla.mozilla.org/show_bug.cgi?id=812900#c24
[9] http://www.audacityteam.org/
[10] http://www.imdb.com/name/nm1244992
[11] http://people.redhat.com/skenlon

Author
Seth Kenlon is an independent multimedia artist, free culture advocate, and UNIX geek. He has worked in the film [10] and computing [11] industry, often at the same time. He is one of the maintainers of the Slackware-based multimedia production project, http://slackermedia.info




OLD SCHOOL


Happy 60th birthday, Fortran
BY BEN COTTON


Fortran may be trending down on Google, but its foundational role in scientific applications ensures that it won’t be retiring anytime soon.




THE FORTRAN COMPILER, introduced in April 1957, was the first optimizing
  compiler, and it paved the way for many technical comput-
  ing applications over the years. What Cobol did for business
  computing, Fortran did for scientific computing.
     Fortran may be approaching retirement age, but that
  doesn’t mean it’s about to stop working. This year marks the
  60th anniversary of the first Fortran (then styled “FORTRAN,”
  for “FORmula TRANslation”) release.
     Even if you can’t write a single line of it, you use Fortran
every day: Operational weather forecast models are still largely written in Fortran, for example. Its focus on mathematical performance makes Fortran a common language in many high-performance computing applications, including computational fluid dynamics and computational chemistry. Although Fortran may not have the same popular appeal as newer languages, those languages owe much to the pioneering work of the Fortran development team.

The trend of “Fortran” as a Google search term from 2004 to 2017.

   In the movie “Hidden Figures [1],” one of the characters teaches herself Fortran because she sees that human computers (including herself) will be replaced by electronic computers. And although much from the early ‘60s has been left to history, Fortran persists. Two years ago, NASA began actively seeking a Fortran programmer [2] to work on the Voyager missions as the last original programmer prepared to retire. Use of Fortran in weather and climate modeling, geophysics, and many other scientific applications means that Fortran knowledge will remain a valued skill for years to come.
   Despite this, Fortran is trending down in searches, and it is no longer taught at some universities (I missed my chance to take my university’s Fortran course by one semester). One atmospheric scientist, preparing to apply for graduate school in the late 2000s, decided she should learn a programming language. When she called local schools and universities to ask whether they offered any courses in Fortran, the response was laughter. So she taught herself, by studying existing code and doing a lot of Google searches. Today, she maintains old Fortran code and writes new code daily.
   Such stories are becoming more prevalent as Fortran’s popularity declines. The great longevity of Fortran provides a wealth of learning material as well as inter-generational bonding. In my first system administration job, a common task was helping graduate students compile Fortran code they inherited from their advisor (who in turn inherited it from their advisor, and so on…).
   A colleague of mine, who coincidentally began existing in 1954 (the year of the first draft of The IBM Mathematical Formula Translating System specification), wrote an article sharing his experience creating a rendering of Da Vinci’s “Mona Lisa” with Fortran [3]. Another friend told me one of his favorite programs as an undergraduate was a Fortran program that created a calendar featuring ASCII-art renderings of the characters from the “Peanuts” comic strip.

A November 2017 calendar page generated by a Fortran program

   What makes Fortran so enduring? Establishing an initial foothold helps, of course. When a language is used in a critical business application, that gives it a lot of staying power because wholly rewriting code is expensive and risky.
   But there’s more to it than that. As the name implies, Fortran is designed to translate mathematical formulas into computer code. That explains its strong presence in fields that deal with a lot of mathematical formulas (particularly partial differential equations and the like).
   And like any technology that has survived the years, Fortran has evolved. Changes in the language take advantage of new paradigms without making rapid changes. Since the first industry standard version of Fortran (FORTRAN 66, approved in 1966), only a few major versions have occurred: FORTRAN 77 (approved in 1978), Fortran 90 (released in 1991 (ISO) and 1992 (ANSI)) and its update, Fortran 95, and Fortran 2003 (released in 2004) and its update, Fortran 2008. A new revision called Fortran 2015 is expected in mid-2018.
   Clearly, there’s no plan for Fortran to retire anytime soon. Active projects are underway to make it easier to run Fortran on GPUs [4]. Will Fortran celebrate its centennial? Nobody knows. But we do know that the Voyager 1 and Voyager 2 spacecraft will carry Fortran code out beyond the reaches of our solar system.

Links
[1] http://www.imdb.com/title/tt4846340/
[2] http://www.popularmechanics.com/space/a17991/voyager-1-voyager-2-retiring-engineer/
[3] http://ezinearticles.com/?The-Genesis-of-Computer-Art-FORTRAN-(Backus)-a-Computer-Art-Medium-Creates-a-Mosaic-Mona-Lisa&id=513898
[4] https://www.nextplatform.com/2017/10/30/hybrid-fortran-pulls-legacy-codes-acceleration-era/
[5] http://www.cyclecomputing.com

Author
Ben Cotton is a meteorologist by training and a high-performance computing engineer by trade. Ben works as a technical evangelist at Cycle Computing [5]. He is a Fedora user and contributor, co-founded a local open source meetup group, and is a member of the Open Source Initiative and a supporter of Software Freedom Conservancy. Find him on Twitter (@FunnelFiasco) or at FunnelFiasco.com.




OLD SCHOOL                ..............   ..
                                 .............




Perl turns 30 and its community continues to thrive
                                                                                                                  BY RUTH HOLLOWAY

  Created for utility and known for its dedicated users, Perl has proven staying power.
  Here’s a brief history of the language and a look at some top user groups.




LARRY WALL released Perl 1.0 to the comp.sources.misc Usenet newsgroup on December 18, 1987. In the nearly 30 years since then, both the language and the community of enthusiasts that sprung up around it have grown and thrived, and they continue to do so, despite suggestions to the contrary!
   Wall’s fundamental assertion, “there is more than one way to do it,” continues to resonate with developers. Perl allows
  programmers to embody the three chief virtues of a pro-
  grammer: laziness, impatience, and hubris. Perl was origi-
  nally designed for utility, not beauty. Perl is a programming
  language for fixing things, for quick hacks, and for making
  complicated things possible partly through the power of com-
munity. This was a conscious decision on Larry Wall’s part: In an interview in 1999, he posed the question, “When’s the last time you used duct tape on a duct?”
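As one small illustration of that motto, here are two equally valid ways to count down from five, written as Perl one-liners at a shell prompt (my own examples, not taken from the article):

perl -E 'say for reverse 1..5'
perl -E 'for (my $i = 5; $i >= 1; $i--) { say $i }'

Both print the same five lines; which style reads better is left to the programmer’s taste.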
A history lesson

Perl 1.0 - Perl 4.036
Larry Wall developed the first Perl interpreter and language while working for System Development Corporation, later a part of Unisys. Early releases focused on the tools needed for the system engineering problems that he was trying to solve. Perl 2’s release in 1988 made improvements on the regular expression engine. Perl 3, in 1989, added support for binary data streams. In March of 1991, Perl 4 was released, along with the first edition of Programming Perl [1], by Larry Wall and Randal L. Schwartz. Prior to Perl 4, the documentation for Perl had been maintained in a single document, but the O’Reilly-published “Camel Book,” as it is called, continues to be the canonical reference for the Perl language. As Perl has changed over the years, Programming Perl has been updated, and it is now in its fourth edition.

Early Perl 5
Perl 5.000, released on October 17, 1994, was a nearly complete rewrite of the interpreter. New features included objects, references, lexical variables, and the use of external,
reusable modules. This new modularity provides a tool for growing the language without modifying the underlying interpreter. Perl 5.004 introduced CGI.pm, which contributed to its use as an early scripting language for the internet. Many Perl-driven internet applications and sites still in use today emerged about this time, including IMDB, Craigslist, Bugzilla, and cPanel.

Modern Perl 5
Perl version 5.10 was released on the 20th anniversary of Perl 1.0: December 18, 2007. Version 5.10 marks the start of the “Modern Perl” movement. Modern Perl is a style of development that takes advantage of the newest language features, places a high importance on readable code, encourages testing, and relies heavily on the use of the CPAN ecosystem of contributed code. Development of Perl 5 continues along more modern lines, with attention in recent years to Unicode compatibility, JSON support, and other useful features for object-oriented coders.

Perl 6
On July 19, 2000, Larry Wall announced at the Perl Conference that he was interested in working on Perl 6, a redesign of the language, with the goal of removing historical warts from the language. Fifteen years later, in December of 2015, Perl 6 1.0 was released. Perl 6 is not backward-compatible with Perl 5, and although intended as a replacement, the Perl 6 team is in no great hurry for Perl 5 to become obsolete. “As for whether Perl 6 will replace Perl 5, yeah, probably, in about 40 years or so,” Larry Wall said in an InfoWorld interview [2] in 2015. There have been starts and stops with several implementations of Perl 6, but only one remains under active development: Rakudo Perl 6 [3]. Because Perl 6 is solely a specification (unlike all prior versions of Perl), it is possible that many implementations could emerge. As the original design documents [4] state, “Perl 6 is anything that passes the official test suite.”

The Perl community
I have heard it said many times in the years I have been involved in the Perl community: Perl is about people. The people who create, maintain, support, and use Perl jointly create an environment where developers can learn and thrive, each working on the things that interest them.

Larry Wall, the man who started it all
At the center of it all, of course, is Larry Wall. Larry and his wife Gloria travel all over the world to Perl and other technical events. When I first joined the Perl community, there seemed to be a bit of hero worship going on around him, but Larry does not particularly enjoy that aspect of his notoriety. He’s a kind, soft-spoken, brilliant man who enjoys coding and the community that has developed around his work. These days you’ll usually see him wearing a broad-brimmed hat and bold print shirts; he’s hard to miss even in a crowd as eclectic as the Perl community.

Larry Wall, creator of Perl (photo credit: Chris Jack, with permission: CC-BY-SA 4.0)

The son and grandson of pastors, Larry is himself a Christian. That heritage of ideas informs some of his work and advocacy on Perl, including the “idea that other people are important.” He and his wife both attended graduate school in linguistics at Berkeley and UCLA and were planning to become missionaries, but they were forced to give up that dream for health reasons. Wall said in a 1999 Linux Journal interview [5], “Funny thing is, now the missionaries probably get more good out of Perl than they’d have gotten out of me as a missionary. Go figure.”
I was privileged to moderate a Q&A session featuring Larry at YAPC::NA 2016, in Orlando, Florida, and got to spend time with him and Gloria. After that meeting, I am honored to call them both my friends. If you ever get a chance to spend time talking to this amazing couple, do so; your life will be enriched by the experience.

Perl 5 Porters
In May of 1994, the Perl 5 Porters email list was founded as a place to coordinate work on porting Perl 5 to different platforms. P5P, as it is now known, is the primary mailing list for discussion about maintenance and development of the standard distributions of Perl. A number of the “porters” are active on IRC [6] as well. The current overseer of this process is called the “Pumpking” or the “Holder of the Pumpkin” [7]. The current Pumpking is Sawyer X [8], who is also involved in the Dancer project [9], which I wrote about a couple of years ago on Opensource.com [10]. P5P discussions can be energetic at times; there are a lot of talented people in there, many of whom have strong opinions. If you’re looking for knowledge
about the core workings of Perl, though, P5P is where that magic is wrought.

Sawyer X, Perl 5 Pumpking (photo credit: Chris Jack, with permission: CC-BY-SA 4.0)

Perl Mongers
In 1997, a group of Perl enthusiasts from the New York City area met at the first O’Reilly Perl Conference (which later became OSCON), and formed the New York Perl Mongers, or NY.pm [11]. The “.pm” suffix for Perl Mongers groups is a play on the fact that shared-code Perl files are suffixed .pm, for “Perl module.” The Perl Mongers [12] organization has, for the past 20 years, provided a framework for the foundation and nurturing of local user groups all over the world and currently boasts 250 Perl Mongers groups. Individual groups, or groups working as a team, sponsor and host conferences, hackathons, and workshops from time to time, as well as local meetings for technical and social discussions.

PerlMonks
Have a question? Want to read the wisdom of some of the gurus of Perl? Check out PerlMonks [13]. You’ll find numerous tutorials, a venue to ask questions and get answers from the community, along with lighthearted bits about Perl and the Perl community. The software that drives PerlMonks is getting a little long in the tooth, but the community continues to thrive, with new posts daily and a humorous take on the religious fervor that developers express about their favorite languages. As you participate, you gain points and levels [14]. The Meditations [15] contains discussions about Perl, hacker culture, or other related things; some include suggestions and ideas for new features.

CPAN
Perl, like many other languages, is modular; new capabilities can be created and installed without having to update the core interpreter. The Comprehensive Perl Archive Network [16], founded in 1993 and online since October 1995, was created to help unify the assortment of scattered archives of Perl modules. The repository is mirrored on more than 250 servers around the world, and it currently contains almost 200,000 modules from over 13,000 authors. New releases of module distributions are uploaded daily [17]. One of CPAN’s interesting artifacts is the Acme:: namespace, the area of CPAN reserved for experiments, entertaining-but-useless modules, and frivolous or trivial ideas. An article on Opensource.com [18] from 2016 looked at a few of these modules just for fun. You can search the CPAN at MetaCPAN [19] for anything you might need.

The Perl Foundation
In 1999, Kevin Lenzo founded the “Yet Another Society,” which has become known as The Perl Foundation [20]. The original intent was to assist in grassroots efforts for events in the North American Perl Conferences, including banking and organizational needs. The focus has since shifted, and TPF now offers grants for extending and improving both Perl 5 and Perl 6. The Perl Foundation also awards the White Camel [21], in recognition of significant non-code contributions to the Perl community.

YAPC Europe Foundation
The YEF [22] was formed in 2003 to help grow the European Perl community, primarily through public events. The YEF supports local Perl Mongers groups in efforts to sponsor conferences by providing an online payment and registration system and kickstart donations. Their efforts support frequent workshops and hackathons in Europe, as well as the annual Perl Conference.

Japan Perl Association
The Japan Perl Association [23] helps promote Perl technology and culture in Asia through advocacy and sponsorship of the annual YAPC::Asia conference, frequently the world’s largest conference on Perl. For many years, the conference was held in Tokyo, but it has recently started moving to other locations in Japan.

Enlightened Perl Organisation
Working in parallel with The Perl Foundation, the Enlightened Perl Organisation [24] works to support Perl projects that help Perl remain an enterprise-grade platform for development. EPO focuses its attention on code, tool-
chain elements, documentation, promotional materials, and tutorials that make corporate adoption of Perl easier. In addition to sponsorship of the London Perl Workshop and the Strawberry Perl [25] initiative, the Enlightened Perl Organisation has provided substantial funding for the CPAN Testers, a group of developers who test CPAN modules against many versions of Perl, on numerous OS platforms. The EPO also sponsors a Send-A-Newbie program, providing funding for first-time attendees to Perl conferences.

YAPC and the Perl Conferences
The first O’Reilly Perl Conference was held in 1997. In 1999, O’Reilly added additional open source content to the program, and that conference became known as OSCON [26]. The first Yet Another Perl Conference [27] was held in June of that year, in Pittsburgh, and has been held in North America every year since. Additional similar conferences were organized in Europe starting in 2000, in Israel since 2003, in Australia since 2004, in Asia and Brazil since 2005, and in Russia since 2008.
The name “The Perl Conference” is owned by O’Reilly, but in 2016, it was announced that an agreement had been reached to allow use of the name for the YAPC conferences, beginning with the 2017 conferences. At each conference, speakers present on Perl and other development-related topics, and there are usually educational workshops before or after the conference. The North American and European conferences generally include 300-400 attendees. The conferences usually offer content for new Perl developers, as well as substantial opportunities for core developers and other community members to interact, collaborate, and present their own work.

A tried-and-true technology…and so much more
As Perl turns 30, the community that emerged around Larry Wall’s solution to sticky system administration problems continues to grow and thrive. New developers enter the community all the time, and substantial new work is being done to modernize the language and keep it useful for solving a new generation of problems. Interested? Find your local Perl Mongers group, join us online, or attend a Perl Conference near you!

Links
[1] http://shop.oreilly.com/product/9780596004927.do
[2] https://www.infoworld.com/article/3017418/application-development/developers-can-unwrap-perl-6-on-christmas.html
[3] http://rakudo.org/
[4] http://design.perl6.org/S01.html
[5] https://www.linuxjournal.com/article/3394
[6] http://www.irc.perl.org/#p5p
[7] http://perldoc.perl.org/perlhist.html#PUMPKIN%3f
[8] https://twitter.com/perlsawyer
[9] http://perldancer.org
[10] https://opensource.com/business/15/9/taking-spin-dancer-lightweight-perl-web-application-framework
[11] https://www.meetup.com/The-New-York-Perl-Meetup-Group/
[12] https://www.pm.org
[13] http://perlmonks.org
[14] http://perlmonks.org/?node=Levels%20of%20Monks
[15] http://perlmonks.org/?node=Meditations
[16] http://cpan.org
[17] https://metacpan.org/recent
[18] https://opensource.com/life/16/10/trick-or-treat-funny-perl-modules
[19] http://metacpan.org
[20] http://perlfoundation.org
[21] https://www.perl.org/advocacy/white_camel/
[22] http://yapceurope.org/
[23] http://japan.perlassociation.org/
[24] https://enlightenedperl.org
[25] http://strawberryperl.com
[26] https://conferences.oreilly.com/oscon
[27] http://yapc.org
[28] http://numbersonthespines.com

Author
Ruth Holloway has been a system administrator and software developer for a long, long time, getting her professional start on a VAX 11/780, way back when. She spent a lot of her career (so far) serving the technology needs of libraries, and has been a contributor since 2008 to the Koha open source library automation suite. Ruth is currently a Perl Developer at cPanel in Houston, and also serves as chief of staff for an obnoxious cat. In her copious free time, she occasionally reviews old romance novels on her blog [28], and is working on her first novel.




OLD SCHOOL              ..............   ..
                               .............



  The origin and evolution
  of FreeDOS                                                                                         BY JIM HALL


  Or, why a community formed around an open source
  version of DOS, and how it’s still being used today.




I GREW UP in the 1970s and 1980s. My parents wanted to expose my brother and me to computers from an early age, so they bought an Apple II clone called the Franklin Ace 1000. I’m sure the first thing we used it for was playing games. But it didn’t take long before we asked, “How does it work?” Our parents bought us a book about how to program in Applesoft BASIC, and we taught ourselves.
I remember my first programs were pretty standard stuff. Eventually I developed a fondness for creating simulations and turn-based games. For example, my friends and I played Dungeons and Dragons in our spare time, and I wrote several D&D-style games. A favorite hobby was recreating the computer readouts from television shows and movies. Perhaps my largest effort was a program, based on the 1983 movie WarGames, that let you “play” global thermonuclear war.
Later, we replaced the Apple with an IBM PC. The BASIC environment on DOS was different from Applesoft BASIC, but I figured it out easily enough. I continued writing programs on it throughout my junior high and high school years.
In 1990, I became an undergraduate physics student at the University of Wisconsin—River Falls. Even though my major was physics, I continued to write programs. I learned the C programming language and picked up a C compiler. I wrote lots of utilities to help me analyze lab data or add new features to the MS-DOS command line. Like many others at the time, I also created utilities that replaced and enhanced the MS-DOS command line.
The university had a computer lab, and I got an account there on the VAX and Unix systems. I really liked Unix. The command line was similar to MS-DOS, but more powerful. I learned to use Unix when I was in the computer labs, but I still used MS-DOS on my personal computer. By running MS-DOS, I could use my favorite programs to write papers and help analyze lab data.
I discovered the concept of “shareware” programs, which let you try a program for free. If you found the program useful, you registered it by sending a check to the program’s author. I thought shareware was a pretty neat idea, and I found MS-DOS shareware programs that filled my needs. For example, I switched from WordPerfect to the shareware GalaxyWrite word processor to write papers. I used AsEasyAs to do spreadsheet analysis and Telix to dial into the university’s computer lab when I needed to use a Unix system.
In 1993, I learned about a Unix system that I could run on my home computer for free. This “Linux” system seemed just as powerful as the university’s Unix systems, but now I could run everything on my home computer. I installed Linux on my PC, dual-booted with MS-DOS. I thought Linux was neat and I used it a lot, but still spent most of my time in MS-DOS. Because let’s face it: In 1993,
there were a lot more applications and games on MS-DOS than on Linux.

How FreeDOS started
Because MS-DOS was my favorite operating system, I had built up this library of utilities I’d written to add new functionality to MS-DOS. I just thought DOS was a great operating system. I’d used Windows by this point—but if you remember the era, you know Windows 3.1 wasn’t a great platform. I preferred doing my work at the command line, not with a mouse.
In early 1994, I started seeing a lot of interviews with Microsoft executives in tech magazines saying the next version of Windows would totally do away with MS-DOS. I looked at Windows 3.1 and said, “If Windows 3.2 or Windows 4.0 will be anything like Windows 3.1, I want nothing to do with it.”
Having experience with Linux, I thought, “If developers can come together over the internet to write a complete Unix operating system, surely we can do the same thing with DOS.” After all, DOS was a fairly straightforward operating system compared to Unix. DOS ran one task at a time (aka single-tasking) and had a simpler memory model. I’d already written a number of utilities that expanded the MS-DOS command line, so I had a head start.
I asked around the comp.os.msdos.apps discussion group on Usenet. Although others were interested in a free DOS, no one wanted to start such a project. So, I volunteered to do it.
On June 29, 1994, I posted this to comp.os.msdos.apps:

   ANNOUNCEMENT OF PD-DOS PROJECT:

   A few months ago, I posted articles relating to starting a public domain version of DOS. The general support for this at the time was strong, and many people agreed with the statement, “start writing!” So, I have…

   Announcing the first effort to produce a PD-DOS. I have written up a “manifest” describing the goals of such a project and an outline of the work, as well as a “task list” that shows exactly what needs to be written. I’ll post those here, and let discussion follow.

   If you are thinking about developing, or have ideas or suggestions for PD-DOS, I would appreciate direct email to me. If you just want to discuss the merits or morals of writing a PD-DOS, I’ll leave that to the net. I’ll check in from time to time to see how the discussion is going, and maybe contribute a little to what promises to be a very polarized debate!

   I am excited about PD-DOS, and I am hoping I can get a group started!

   —James Hall

   PS—of course, if this already exists, please point me to the group leader so I can at least contribute!

Developers contacted me almost immediately. We had all written our own MS-DOS extensions, power tools that expanded what you could do on the MS-DOS command line. We pooled our utilities and looked on public FTP sites for public domain source code to other programs that replicated the features of MS-DOS.
A note about the name: When I started the project, I didn’t fully understand the nuances between “free software” and “public domain.” I assumed they were the same. And certainly, many of the free tools we found on FTP sites were released into the public domain. I adopted the name PD-DOS for Public Domain DOS. It took only a few weeks before I realized we wanted the protection of the GNU General Public License, which would make our DOS project a “free software” project. By late July, we changed the name to Free-DOS. Later, we dropped the hyphen to become FreeDOS.

How FreeDOS is used today
Over the years developers have shared with me how they use FreeDOS to run embedded systems. My all-time favorite example is a developer who used FreeDOS to power a pinball machine. FreeDOS ran an application that controlled the board, tallied the score, and updated the back display. I don’t know exactly how it was built, but one way such a system could work is to have every bumper register a “key” on a keyboard bus and the application simply read from that input. I thought it was cool.
People sometimes forget about legacy software, but it pops up in unexpected places. I used to be campus CIO of a small university, and once a faculty member brought in some floppy disks with old research data on them. The data
wasn’t stored in plaintext files, but rather as DOS application data. None of our modern systems would read the old data files, so we booted a spare PC with FreeDOS, downloaded a shareware DOS program that could read the application data, and exported the data to plaintext.
There are other examples of legacy software running on DOS. My favorite is the McLaren F1 supercar [1], which can only be serviced with an ancient DOS laptop. And Game of Thrones author George R.R. Martin uses DOS to write his books.

George R. R. Martin Still Uses A DOS Word Processor [2]

They probably use MS-DOS, but I believe there are a bunch of other legacy systems running on FreeDOS.
A few years ago, we ran a survey to see how people use FreeDOS, and three different ways emerged:

1. To play classic DOS games: You can play your favorite DOS games on FreeDOS. And there are a lot of great classic games to play: Wolfenstein 3D, Doom, Commander Keen, Rise of the Triad, Jill of the Jungle, Duke Nukem, and many others.
2. To run legacy software: Need to recover data from an old business program? Or maybe you need to run a report from your old finance system? Just install your legacy software under FreeDOS, and you’ll be good to go.
3. To develop embedded systems: Many embedded systems run on DOS, although modern systems are more likely to run on Linux. If you support an older embedded system, you might be running DOS, and FreeDOS can fit in very well.

You can probably add a fourth category to those FreeDOS use cases: updating BIOS. I get a lot of email and comments from people who still boot FreeDOS to update the BIOS in their computer system. DOS is still a safe way to do that.
It’s true that you don’t see much DOS in embedded systems being developed today. I think the Raspberry Pi [3] and other low-cost and low-power devices have made Linux in embedded devices very attractive, so most developer interest has moved there. But you still see FreeDOS sometimes, a testament to the staying power of open source development.

Links
[1] http://jalopnik.com/this-ancient-laptop-is-the-only-key-to-the-most-valuabl-1773662267
[2] https://www.youtube.com/watch?time_continue=25&v=X5REM-3nWHg
[3] https://opensource.com/resources/what-raspberry-pi

Author
Jim Hall is an open source software developer and advocate, probably best known as the founder and project coordinator for FreeDOS. Jim is also very active in the usability of open source software, as a mentor for usability testing in GNOME Outreachy, and as an occasional adjunct professor teaching a course on the Usability of Open Source Software. From 2016 to 2017, Jim served as a director on the GNOME Foundation Board of Directors. At work, Jim is Chief Information Officer in local government.




                                                                               ....................     OLD SCHOOL




How to run DOS
programs in Linux
                                                                  BY JIM HALL


QEMU and FreeDOS make it easy to run old DOS programs under Linux.




THE CLASSIC DOS operating system supported a lot of great applications: word processors, spreadsheets, games, and other programs. Just because an application is old doesn’t mean it’s no longer useful.
There are many reasons to run an old DOS application today. Maybe to extract a report from a legacy business application. Or to play a classic DOS game. Or just because you are curious about “classic computing.” You don’t need to dual-boot your system to run DOS programs. Instead, you can run them right inside Linux with the help of a PC emulator and FreeDOS [1].
FreeDOS is a complete, free, DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.
In the “old days,” you installed DOS as the sole operating system on a computer. These days, it’s much easier to install DOS in a virtual machine running under Linux. QEMU [2] (short for Quick EMUlator) is an open source software virtual machine system that can run DOS as a “guest” operating system under Linux. Most popular Linux systems include QEMU by default.
Here are four easy steps to run old DOS applications under Linux by using QEMU and FreeDOS.

Step 1: Set up a virtual disk
You’ll need a place to install FreeDOS inside QEMU, and for that you’ll need a virtual C: drive. In DOS, drives are assigned with letters—A: and B: are the first and second floppy disk drives and C: is the first hard drive. Other media, including other hard drives or CD-ROM drives, are assigned D:, E:, and so on.
Under QEMU, virtual drives are image files. To initialize a file that you can use as a virtual C: drive, use the qemu-img command. To create an image file that’s about 200MB, type this:

qemu-img create dos.img 200M

Compared to modern computing, 200MB may seem small, but in the early 1990s, 200MB was pretty big. That’s more than enough to install and run DOS.
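If you want to double-check the file you just created, qemu-img can also report an image’s format and size. This quick check is optional and not part of the original four steps:

qemu-img info dos.img

The output should describe a raw-format image with a virtual size of about 200MB.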



Step 2: QEMU options
Unlike PC emulator systems like VMware or VirtualBox, you need to “build” your virtual system by instructing QEMU to add each component of the virtual machine. Although this may seem laborious, it’s really not that hard. Here are the parameters I use to boot FreeDOS inside QEMU:

qemu-system-i386: QEMU can emulate several different systems, but to boot DOS, we’ll need to have an Intel-compatible CPU. For that, start QEMU with the i386 command.

-m 16: I like to define a virtual machine with 16MB of memory. That may seem small, but DOS doesn’t require much memory to do its work. When DOS was king, computers with 16MB or even 8MB were quite common.

-k en-us: Technically, the -k option isn’t necessary, because QEMU will set the virtual keyboard to match your actual keyboard (in my case, that’s English in the standard U.S. layout). But I like to specify it anyway.

-rtc base=localtime: Every classic PC provides a real time clock (RTC) so the system can keep track of time. I find it’s easiest to simply set the virtual RTC to match your local time.

-soundhw sb16,adlib,pcspk: If you need sound, especially for games, I prefer to define QEMU with SoundBlaster16 sound hardware and AdLib Music support. SoundBlaster16 and AdLib were the most common sound hardware in the DOS era. Some older programs may use the PC speaker for sound; QEMU can also emulate this.

-device cirrus-vga: To use graphics, I like to emulate a simple VGA video card. The Cirrus VGA card was a common graphics card at the time, and QEMU can emulate it.

-display gtk: For the virtual display, I set QEMU to use the GTK toolkit, which puts the virtual system in its own window and provides a simple menu to control the virtual machine.

-boot order=: You can tell QEMU to boot the virtual machine from a variety of sources. To boot from the floppy drive (typically A: on DOS machines) specify order=a. To boot from the first hard drive (usually called C:) use order=c. Or to boot from a CD-ROM drive (often assigned D: by DOS) use order=d. You can combine letters to specify a specific boot preference, such as order=dc to first use the CD-ROM drive, then the hard drive if the CD-ROM drive does not contain bootable media.

Step 3: Boot and install FreeDOS
Now that QEMU is set up to run a virtual system, we need a version of DOS to install and boot inside that virtual computer. FreeDOS makes this easy. The latest version is FreeDOS 1.2, released in December 2016.
Download the FreeDOS 1.2 distribution from the FreeDOS website [3]. The FreeDOS 1.2 CD-ROM “standard” installer (FD12CD.iso) will work great for QEMU, so I recommend that version.
Installing FreeDOS is simple. First, tell QEMU to use the CD-ROM image and to boot from that. Remember that the C: drive is the first hard drive, so the CD-ROM will show up as the D: drive.

qemu-system-i386 -m 16 -k en-us -rtc base=localtime -soundhw sb16,adlib -device cirrus-vga -display gtk -hda dos.img -cdrom FD12CD.iso -boot order=d

Just follow the prompts, and you’ll have FreeDOS installed within minutes.




After you’ve finished, exit QEMU by closing the window.
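To start the newly installed system again later, boot from the virtual C: drive instead of the CD-ROM. A small wrapper script keeps the long command manageable; this is only a convenience sketch built from the same options described above, and the file name run-freedos.sh is my own example:

#!/bin/sh
# run-freedos.sh -- boot the installed FreeDOS hard drive image under QEMU
# (example wrapper; adjust the options and the path to dos.img to taste)
qemu-system-i386 -m 16 -k en-us -rtc base=localtime \
  -soundhw sb16,adlib -device cirrus-vga -display gtk \
  -hda dos.img -boot order=c

Make it executable with chmod +x run-freedos.sh and you can boot into FreeDOS with a single command.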

Step 4: Install and run your DOS application
Once you have installed FreeDOS, you can run different DOS applications inside QEMU. You can find old DOS programs online through various archives or other websites [4].
QEMU provides an easy way to access local files on Linux. Let’s say you want to share the dosfiles/ folder with QEMU. Simply tell QEMU to use the folder as a virtual FAT drive by using the -drive option. QEMU will access this folder as though it were a hard drive.

-drive file=fat:rw:dosfiles/

Now, start QEMU with your regular options, plus the extra virtual FAT drive:

qemu-system-i386 -m 16 -k en-us -rtc base=localtime -soundhw sb16,adlib -device cirrus-vga -display gtk -hda dos.img -drive file=fat:rw:dosfiles/ -boot order=c

Once you’re booted in FreeDOS, any files you save to the D: drive will be saved to the dosfiles/ folder on Linux. This makes reading the files directly from Linux easy; however, be careful not to change the dosfiles/ folder from Linux after starting QEMU. QEMU builds a virtual FAT table once, when you start QEMU. If you add or delete files in dosfiles/ after you start QEMU, the emulator may become confused.
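Because of that, it’s simplest to stage everything you want to share before you launch the emulator. For example (the folder name matches the example above; the archive name is only a placeholder):

mkdir -p dosfiles
cp ~/Downloads/my-dos-program.zip dosfiles/   # placeholder name; copy whatever you want FreeDOS to see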
I use QEMU like this to run my favorite DOS programs, like the As-Easy-As spreadsheet program. This was a popular spreadsheet application from the 1980s and 1990s, which does the same job that Microsoft Excel and LibreOffice Calc fulfill today, or that the more expensive Lotus 1-2-3 spreadsheet did back in the day. As-Easy-As and Lotus 1-2-3 both saved data as WKS files, which newer versions of Microsoft Excel cannot read, but which LibreOffice Calc may still support, depending on compatibility.

As-Easy-As spreadsheet program

I also like to boot FreeDOS under QEMU to play some of my favorite DOS games, like the original Doom. These old games are still fun to play, and they all run great under QEMU.

Doom

Heretic




Jill of the Jungle

Commander Keen

QEMU and FreeDOS make it easy to run old DOS programs under Linux. Once you’ve set up QEMU as the virtual machine emulator and installed FreeDOS, you should be all set to run your favorite classic DOS programs from Linux.

All images courtesy of FreeDOS.org [5].

Links
[1] http://www.freedos.org/
[2] https://www.qemu.org/
[3] http://www.freedos.org/
[4] http://www.freedos.org/links/
[5] http://www.freedos.org/

Author
Jim Hall is an open source software developer and advocate, probably best known as the founder and project coordinator for FreeDOS. Jim is also very active in the usability of open source software, as a mentor for usability testing in GNOME Outreachy, and as an occasional adjunct professor teaching a course on the Usability of Open Source Software. From 2016 to 2017, Jim served as a director on the GNOME Foundation Board of Directors. At work, Jim is Chief Information Officer in local government.




                           .  ........
                             ... .. ...
O P E N S O U R C E . C O M.. .. .. ....
                                        . .
EDITORIAL CALENDAR
                                          ..............   ..
                                                 .............
           Would you like to write for us?
           Our editorial calendar includes upcoming themes, community columns, and topic
           suggestions: https://opensource.com/calendar

           Happy Pi Day!
           To celebrate Pi Day, we're rounding up a series on the Raspberry Pi. What projects have you created?
           What solutions to common problems have you found? What do you do with your Raspberry Pi?

           Containers
           How are you or your organization using Linux containers to get work done, to push innovation forward,
           and to find new solutions to technical problems?

           Open Hardware and DIY
           Show off your tutorials and demos of hardware in the wild, and tell us about projects you work on and
           how you use open hardware. Let's see those DIY projects that automate your appliances and up your
           geek fashion cred.

           Entertainment and Geek Culture
           We're looking for geek culture stories and articles about how open source tools, projects, and communities
           keep us entertained.

           Back to School
           Which open source tools are helpful for the classroom? How are open source technologies being used or
           taught in your schools? We're always excited to hear how open source is improving education, so send
           us your stories.

           Programming
           Show off your scripts, tips for getting started, tricks for developers, and tutorials, and tell us about your
           favorite programming languages and communities.

           Kubernetes, Automation, Machine Learning, Artificial Intelligence, DevOps, and more
           We want to hear your stories about the practical tools and trends affecting your work, as well as the
           up-and-coming technologies you’re learning and using. Send us your article ideas!


                                                      Email story proposals to open@opensource.com



