I had heard about IPython on my Twitter feed from people like
Titus Brown but only in the context
of IPython Notebooks for sharing data and calculations. I had been
meaning to look into it because I figured at some point someone would
want our group at work to do something about sharing how to do
scientific analyses, etc. and that knowing IPython might come in
handy. What I didn’t know is that you can use IPython - without the
notebook component - as a replacement for the standard python REPL,
kind of like how lots of people use pry instead of Ruby’s built in
irb.
I wish I had looked into this sooner. IPython is more like the python
REPL I needed when I was starting out in Python. I haven’t had a
chance to use it a lot - for one thing I installed IPython in my
virtual environment and my current environment doesn’t really have
anything in it to play with. But already I am hopeful.
The main thing that makes running Python in a shell inside emacs so
painful is that copying code and pasting into the emacs window makes
the indenting go all weird. Can ipython fix that? I am not sure. But
it does have a %paste macro. Does that help? Perhaps it helps for
pasting with the paste buffer. But my usual emacs workflow of copying
a chunk of code into my kill ring and then yanking it into the shell
is still broken due to whitespace issues (unless of course the code is
actually not indented, which rarely happens). Perhaps %run will
help - at least to get some code into my REPL so I can play with it.
ipython has a nice readline history feature. One nice option is being
able to get the output from the last 3 commands you ran with: _, __, and ___.
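For example (an illustrative session, not copied from real output):

```
In [1]: 2 + 3
Out[1]: 5

In [2]: _ * 10
Out[2]: 50

In [3]: _ + __
Out[3]: 55
```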
I am also very impressed with being able to run system commands and
assign the output to python variables, e.g. %cd /Users/me followed by
myfiles = %ls .bash*
To get specific information on an object, you can use the magic
commands %pdoc, %pdef, %psource and %pfile. OMG!
This is like pry’s cd into an object and look around. Instructions for
using these magic commands as part of some other python program
are in the docs. This seems to me to be a lot like binding.pry to
start a REPL somewhere.
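Based on the docs, embedding looks something like this sketch (I have
not tried it in a real program yet):

```python
# Sketch: calling embed() drops into a nested IPython shell at this
# point in the program, much like Ruby's binding.pry.
from IPython import embed

def buggy_function(x):
    result = x * 2
    embed()  # poke at x and result interactively, then exit to continue
    return result
```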
Since %automagic is turned ON by default, you can often just do
ls or cd without the leading %. You can toggle the
automagic by typing %automagic until you get what you want.
To see what things are included in automagic:
%magic -brief
The IPython.lib.deepreload module allows you to recursively
reload a module: changes made to any of its dependencies will be
reloaded without having to exit. To start using it, do:
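Per the docs, the invocation is something like:

```
In [1]: from IPython.lib.deepreload import reload as dreload
In [2]: dreload(mymodule)
```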
Misc
Automatic quoting
The automatic quoting
sounded useful but when I played around with it, it was odd. All I
really want is an equivalent of Ruby’s %w(foo bar baz) so I don’t have
to type a boatload of “ and , to get ["foo", "bar", "baz"].
Unfortunately this is not that feature.
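For the record, plain Python gets pretty close without any magic at all:

```python
# Ruby's %w(foo bar baz) in plain Python: just split on whitespace
words = "foo bar baz".split()
print(words)  # ['foo', 'bar', 'baz']
```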
Spying on IPython
Or “How IPython keeps track of what you have running where”.
When the ipython notebook started up, it created the nbserver-###.json
file. When I started actually running code in an .ipynb file, then it
needed a running kernel - so it spun one up and created the
kernel-###.json file
This part of the docs has lots of useful information
about how ipython processes communicate with the kernel - message
passing, security (only bind to localhost OR use ssh tunnels to
encrypt traffic and keep it private).
For our new project we are trying to use Python 3, so I want to
evaluate IPython on the latest version of Python, which currently is
3.4.3. I am using pyenv and
pyenv-virtualenv to manage my
python installs. So with those all set up, I installed python 3.4.3
and created a virtualenv to use for ipython.
I haven’t used IPython before - or should I say Jupyter since that is
the project’s new name. So I watched a couple of the introductory
videos I found on YouTube. This video
told me I needed to add the python 3 kernel to jupyter, so I tried to
do that but kept getting messages about missing pieces every time I
tried to run ipython.
What I had overlooked was an install target that would install not
only ipython but all the things it typically depends on / uses. Once I
installed the ‘all’ extra (pip install "ipython[all]") I got a usable ipython.
Exporting from IPython to other formats
One of the things we are most interested in for work is the export
formats that IPython / Jupyter supports so I tried out all the export
formats. The PDF renderer was looking for pandoc. According to the
pandoc install instructions,
before I can install pandoc, I need a version of LaTeX for the
Mac. So, I installed MacTeX from their
MacTeX.pkg. And then I installed pandoc with homebrew: brew install
pandoc. I still can’t export PDFs from the drop down menu in the
IPython web interface:
However, from the command line, I can convert my test document to
latex:
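The conversion command was something along these lines (the exact
nbconvert flags may differ between IPython versions):

```
ipython nbconvert --to latex CNKTest.ipynb
```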
produces the file CNKTest.tex which I can open and view in Latexit
(which came from MacTeX).
Matplotlib
Some of the things I am playing around with want matplotlib - which
does not appear to be installed in my virtualenv. So I ran pip
install matplotlib which says it did:
And now I can load matplotlib inside a running ipython session with
%matplotlib (or I could load it when I start the session using
the command line argument: ipython --matplotlib).
I guess I am old school but I prefer mailing lists to web forums -
probably because I vastly prefer mutt to any forum web site I have
seen. But if I have to use a discussion board, then by far the best
one I have seen is Discourse. I first
heard about it in a couple of episodes of the Ruby Rogues
podcast.
When the Rogues moved their mailing list from Google Groups to
Discourse, I was a little annoyed because Discourse didn’t support
‘just email it all to me’ and I just never get around to checking web
sites. Fortunately the good folks at Discourse added a setting that
allows folks like me to get all our Ruby Parley
goodness by email. But for those who like web forums, Discourse has
some really nice features - keeping track of what is new, likes,
bookmarks, follow individual posts, private messages and a really nice
reputation system.
At work we are looking for a discussion board for a project we are
doing and I suggested Discourse. In addition to the hosted
discussions, there is an open source version of the same code
available on GitHub. The
recommended way to install Discourse is to use the supplied
Docker images.
Setting up my Vagrant
To get started I needed an Ubuntu 14 LTS disk image. The one
distributed by Hashicorp labeled “lattice/ubuntu-trusty-64” is
configured to use 4G memory and isn’t running chef or puppet (so
nothing to interfere with the Docker stuff from Discourse).
Installing Discourse
I misread the instructions and instead of typing ./launcher bootstrap app,
I did ./launcher start app. That gave some reasonable looking output before
it bailed out complaining:
Doing ./launcher rebuild app did reasonable looking things like
starting databases, checking out code from git, running database
migrations, and compiling assets.
I tried modifying my /etc/hosts file to include
discourse.example.com but no dice - I think because my Vagrant isn’t
exposing port 80. So I shut down the vagrant box and edited the
Vagrantfile to uncomment the port forwarding line.
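In a stock Vagrantfile, the line in question looks roughly like this
(your port numbers may differ):

```ruby
# forward guest port 80 (Discourse) to port 8080 on the host
config.vm.network "forwarded_port", guest: 80, host: 8080
```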
Now http://discourse.example.com:8080 works. Since I don’t
really need this to answer to a specific name, I took the modification
back out of /etc/hosts and will just use
http://localhost:8080 to play with configuring and testing Discourse.
With Rails’ ‘respond_to’ method, it is really easy to return different
kinds of data from the same controller method based on the url
extension. I was really happy with this when I was building a site
to provide a read-only version of an event calendar I had built years
earlier on the ArsDigita Community System. To get data out of that
Oracle database, I built a Rails web site that let users ask for
events by category, sponsor, or lecture series with a variety of date
restrictions. The data was consumed by several different clients - two
who wanted XML and several others who wanted JSON. I could use exactly
the same URLs by just adding .xml or .json to the url. Even better, I
could have the controller render an html version of the same data which
developers could use to preview the result sets. Once they had all the
parameters the way they wanted, they just copied the url and added
‘.json’ and used it in their web pages.
But now I actually want to move into the 21st century and build some
web pages with AJAX interactions. Rails has supported a variety of
methods of working with Javascript starting with the RJS methods in
the version 1.x days. Currently Rails (4.2.0) uses JQuery by default -
including a jquery-ujs package which, when combined with the ‘remote:
true’ argument to ‘form_for’, makes it simple to have your forms
submit via AJAX instead of a normal server request.
First, the regular form_for helper:
But with ‘remote: true’ (note no authenticity_token):
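Roughly, the two helper calls differ only in the remote option (a
sketch assuming a @course model; the names are illustrative):

```erb
<%# regular form: full-page POST, includes an authenticity_token field %>
<%= form_for @course do |f| %> ... <% end %>

<%# remote form: jquery-ujs adds data-remote="true" and submits via AJAX %>
<%= form_for @course, remote: true do |f| %> ... <% end %>
```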
This comes into the server and is interpreted as a JS request:
which will fall into the format.js case in the normal respond_to
stanza, which by default, will try to render the create.js.erb
template in your courses views. This view should contain jQuery
directives that will be executed in the browser.
This flow has some pros and cons. On the bright side, it is relatively
easy (as long as you remember to use ‘escape_javascript’ on any html
you produce). But it means you have a fair amount of javascript
scattered across small files in your views directory. It is JS that
manipulates your html so in that sense it is a view. But even if I am
less than enthusiastic about that aspect, the big selling point is
that I can reuse the code in my view partials. The exact same embedded
ruby (erb) that created the table rows that loaded when the page first
loaded is the erb that creates the new row JQuery inserts when I
finish adding a new course.
Speaking JSON
If you want to keep all of your JavaScript together, for example if
you are using a more extensive front end framework, then you may want
to have your AJAX interactions communicate as JSON. The adjustments
you need to make for that start with adding a data-type argument to
your form tag:
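With form_for, that option can be passed through the html hash (a
sketch, again assuming the @course form):

```erb
<%= form_for @course, remote: true, html: { data: { type: :json } } do |f| %> ... <% end %>
```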
If you don’t add a different format section, this will also end up in
the format.js section and will send back the jquery selectors and
manipulations as before - with a Content-Type header of
‘text/javascript’. However, since the browser wasn’t expecting to get
JavaScript, it doesn’t evaluate it (at least Chrome doesn’t). So I can
see the server responding without error, and the data coming into the
browser with a status code of 200. No JavaScript errors register in
the console - but nothing happens on the page. All in all my least
favorite type of error - no error, just nothing happens.
So to correct that, we add a ‘format.json’ section to our ‘respond_to’
block:
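A minimal sketch of what that action might look like (assuming a
Course model and a strong-parameters helper; this is not the exact
code from my app):

```ruby
def create
  @course = Course.new(course_params)
  respond_to do |format|
    if @course.save
      format.html { redirect_to @course }
      # send back the new record with Content-Type: application/json
      format.json { render json: @course, status: :created }
    else
      format.html { render :new }
      format.json { render json: @course.errors, status: :unprocessable_entity }
    end
  end
end
```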
So now the browser gets back a JSON representation of the new item
with a Content-Type header of ‘application/json’. Then it is up to us
to write the client-side JavaScript - for example to add the table row
with the new element. That might look something like this (untested code):
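For example, something along these lines (an untested sketch; the
course fields and DOM ids are made up):

```javascript
// Build the markup for one table row from the JSON the server returned.
// The course fields (id, name) are hypothetical.
function buildCourseRow(course) {
  return '<tr id="course_' + course.id + '">' +
         '<td>' + course.name + '</td>' +
         '</tr>';
}

// In the browser, jquery-ujs fires ajax:success with the parsed JSON:
if (typeof $ !== 'undefined') {
  $('#new_course').on('ajax:success', function (event, course) {
    $('#courses tbody').append(buildCourseRow(course));
  });
}
```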
This keeps all your JavaScript on the client side - but at the expense
of mixing some html into it as you build the new table row.
We just got new Macs at work - which came with Yosemite
preinstalled. There are a few odds and ends that I am not thrilled
with, but overall I have found the transition from Lion to be quite
smooth. So I guess it is time to upgrade my personal laptop - before
any more security issues are announced.
I went to the App Store and downloaded the installer and ran it. All
seemed to be going well until the point where the installer thought it
had “2 minutes left”. Two hours later, I gave up and went to bed,
leaving it running. I have no idea when it actually finished but in
the morning, my machine was asleep. When I woke it up, I got the odd
gray screen overlay, with the blinking gray startup bars near the
bottom - like you get when you have run the battery completely out. I
was wondering if either the install had failed (probably not, since
the screen behind the gray overlay was the new, flat, Yosemite look)
or if the machine had overheated and crashed. It got very hot during
the install but I had not heard the fan running. I did what I could to
leave the bottom of the machine with good airflow so it could cool off
but that was all I could do for that issue.
Anyway, despite the odd gray screen, the machine started up OK, let me
log in, and then ran some “setting up your new mac” things. Looks
OK. I launched the app store and installed the two security updates
that have come out since Yosemite launched. I also downloaded the Mac
office apps, pages, keynote, numbers….
XCode
I downloaded the newest XCode (6.1) and installed it. The first time
it launched, I was asked to allow it to upgrade the device support -
which it did and then promptly crashed. When I tried to launch it
again, it crashed again. But third time is the charm. It again asked
if it could run some (different) updates, but this time, it then gave
me the normal XCode interface. Until I get around to learning iOS
programming, I don’t really have much use for the XCode GUI - except
for using the iOS simulator as one of the targets for Karma to run
JavaScript tests in a browser. The reason I need XCode is for the
compiler. To get the command line tools installed, I ran xcode-select
--install and agreed to install the command line tools. Then I ran
xcodebuild --license. This was supposedly to accept the Xcode
license - but just like when I did it for setting up my work machine,
I got xcodebuild: error: invalid option '--license'. Chris said
that indicated I had already accepted the license; not sure about
any of that, but it doesn’t seem to have interfered with compiling
stuff at work, so I am not going to worry about it.
X11
X11 no longer comes with OS X, but when I tried to use the X11 icon in
my dock, it helpfully redirected me to this page
which allowed me to download XQuartz 2.7.7
The installer ran just fine and at the end told me I had to log out
and back in again to make this the default X11 for my machine.
Homebrew
I should have done a little checking and preparation before I did my
upgrade. At least with Mac Ports, you generally can’t run any of the
port commands after you have upgraded the OS. This means that it is
really hard to figure out which packages you had installed (in
particular, which were installed explicitly and which were installed
because they were dependencies for something else you asked for)
unless you ask for a list before doing the upgrade. While I was
looking for blog posts about updating my homebrew-installed software,
I found this blog post
which explains why my Yosemite upgrade took FOREVER once it got to the
‘2 minutes left’ stage. Something about moving /usr/local and then
moving all the files back… one by one. Oh well, at least now I know
what that was about.
Fortunately, it looks like Homebrew will still basically work after
the upgrade. I can run brew list and get a reasonable list of
installed packages: emacs, docbook, graphviz, imagemagick, mongodb,
mysql, node, postgresql, python, wget, etc. The blog post above
recommended running brew doctor which gave me several sets of
advice:
some broken symlinks to remove with brew prune
warnings about unlinked kegs
a keg-only formula for libxml2 that is linked into /usr/local
a warning that I should run brew update to get the latest formula info
The last one seemed reasonable, so I started there. The output of that
command (and some reading of docs and the install script) explained
why brew survived my upgrade. Homebrew just uses the system ruby -
which is always there and should always run on the installed OS. And
it explains why I don’t see separate ‘upgrade homebrew itself’ and
‘update the package list’ commands. Both are taken care of by running
brew update which checks out the latest software and package
information from Github:
The first piece of advice also seemed reasonable - clean up dangling symlinks:
Then, I ran brew outdated to see what needs updating:
I think I want to update just about everything. But I am a little
concerned about upgrading the databases. Let’s start by pinning them,
then upgrading everything else, and then upgrading them one at a
time. But first, do they run now?
Looks like mysql is running - I see processes for mysqld_safe and
mysqld in ps and I can log in as root and poke around. The postgresql
server is not running at the moment and I mostly have not been using
it. So I am just going to pin those two packages and upgrade
everything else:
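Something like this should do it (brew pin keeps a formula from being
upgraded until you unpin it):

```
brew pin mysql postgresql
brew upgrade
```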
Then the updates all ran - some giving messages, mostly about how to
launch daemons or warning you not to link certain libraries because
they would conflict with the OS X versions. To pick up the last few
upgrades, I did as suggested and ran brew link fontconfig pixman and
then reran brew upgrade. I will save the output of all of these
upgrades in case I need to go back to it.
MySQL
The MySQL upgrade is only one minor version different, so I think I’ll
try to do that upgrade. About the only database I care about in the
current one might be the cardsharp_dev database, so I am not too
worried about the data even if there is some strange incompatibility.
Logged in and looked around. Seems fine at least at a glance.
Starting and stopping services
While I was looking around for how to stop MySQL before the upgrade, I
found two useful resources, this launchctl tutorial
and this blog post on a brew command to replace all this: brew services
Postgres
Perhaps I should leave it, but I would really like to have everything
upgraded for the start of the new year. I had trouble getting to old
databases after a previous postgres upgrade. So first I want to take a
backup. And to do that, I need to start the database server…..
It looks like the [Yosemite upgrade removed a few empty directories](http://stackoverflow.com/questions/25970132/pg-tblspc-missing-after-installation-of-os-x-yosemite)
that we need:
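Per that Stack Overflow answer, recreating the empty directories is
something like this (paths assume the default Homebrew postgres
location):

```
mkdir /usr/local/var/postgres/pg_tblspc
mkdir /usr/local/var/postgres/pg_twophase
mkdir /usr/local/var/postgres/pg_stat_tmp
```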
I logged in and looked around. Looks like there are 2 tablespaces (\db)
both owned by me - and a single schema (\dn) - also owned by me. What
I missed was how to list the databases: \l. So I missed doing the
pg_dump before I upgraded the database. Fortunately there is an
upgrade command which I found detailed at
https://kkob.us/2014/12/20/homebrew-and-postgresql-9-4/.
There was a lot more output - and eventually the upgrade failed
because I had already created the database ‘cnk’. So let’s remove the
new stuff and try again:
The end of the initdb output mentioned it was enabling ‘trust
authentication’ for all local connections. I only want my user to be
able to log into postgres. So before starting up the database, I
edited /usr/local/var/postgres/pg_hba.conf to change the user from
‘all’ to ‘cnk’.
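The edited lines end up looking something like this (a sketch; the
column spacing in the real file is wider):

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       cnk                 trust
host    all       cnk   127.0.0.1/32  trust
```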