I used Kitchenplan to
set up my new Mac. There is a newer configuration option based on
Ansible by the same author -
Superlumic. I would like
to try it but didn’t have time to experiment with it this time around.
The big plus for using Kitchenplan was that our small development team
ended up with Macs that are all configured more or less the same
way. Another plus is that it installs bash_it, which does a lot more shell
configuration than I have ever bothered to do. The only thing I have
found not to like is that it wants to invoke git’s diff tool instead
of the regular Unix diff. To shut that off, I just edited the place
where that was set up. In /etc/bash_it/custom/functions.bash (line
72) I commented out:
I am trying to improve the test coverage of our work project and
needed to test the avatar upload that is associated with creating
users in our project. I didn’t find any place that laid out how to
test file uploads. Fortunately the tests for the easy-thumbnails app
we use are pretty good and I was able to piece something together
using their code as a model.
In case anyone else is looking for something like this, I updated my
easy thumbnails example project
to include a couple of tests.
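In the same spirit, here is a hedged sketch of such a test. The view URL, form field names, and test-case names below are hypothetical, not the project's real code; the helper builds a tiny valid PNG with only the standard library so no image file on disk is needed.

```python
import struct
import zlib

def make_test_png(width=1, height=1):
    """Build a minimal valid RGB PNG entirely in memory (stdlib only)."""
    def chunk(tag, data):
        # Each PNG chunk: length, tag, data, CRC over tag + data.
        return (struct.pack(">I", len(data)) + tag + data +
                struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))
    # IHDR: width, height, bit depth 8, color type 2 (RGB), defaults.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    # One filter byte (0) plus 3 black bytes per pixel, per scanline.
    raw = b"".join(b"\x00" + b"\x00\x00\x00" * width for _ in range(height))
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr) +
            chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

# In a Django TestCase, the bytes get wrapped in SimpleUploadedFile and
# POSTed to the form (URL and field names here are assumptions):
#
# from django.core.files.uploadedfile import SimpleUploadedFile
# from django.test import TestCase
#
# class AvatarUploadTest(TestCase):
#     def test_create_user_with_avatar(self):
#         avatar = SimpleUploadedFile("avatar.png", make_test_png(),
#                                     content_type="image/png")
#         response = self.client.post("/users/create/",
#                                     {"name": "Test", "avatar": avatar})
#         self.assertEqual(response.status_code, 302)
```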
I am working on a project that has two separate but interrelated
Django web sites (projects in Django’s parlance). In an earlier blog post,
I described setting up the second project (mk_ai) to have read-only
access to the first project’s database (mk_web_core) in dev but then
getting around those access restrictions for testing. The main thing I
need for testing is a big set of hierarchical data to be loaded into
the first project’s test database. I can use the manage commands
dumpdata and loaddata to preserve data in my development environment,
but when I tried to load that same data into the test database, I ran
into problems.
We are using GenericForeignKeys and GenericRelations.
Django implements GenericForeignKeys by creating a database foreign
key into the django_content_type table. In our mixed database setup,
my django_content_type table is in the mk_ai schema. So, even if I set
up my database router to allow_relation across databases AND the
postgres database adapter were willing to attempt that join, the
content types referenced by mk_web_core’s data would not be in mk_ai’s
django_content_type table. So we can’t use Django’s GenericForeignKeys.
What shall we do instead?
Rails implements a similar type of relationship with a feature it
calls Polymorphic Associations.
Django stores the object’s id plus a foreign key to the row in the
content_type table representing the object’s model. Rails stores the
object’s id plus the object’s class name in a field whose name ends in
_type. I decided to use the Rails method to set up
my database representations. That replaces the GenericForeignKey
aspect. To replace the GenericRelation part, I just created a case
statement that allows queries to chain to the appropriate related
model based on the stored content type. Perhaps showing an example will
make this clearer.
The original way, using Django's GenericForeignKey:
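(The post’s original snippet isn’t reproduced here; this is a minimal sketch of the pattern. The Block and Page names come from later in the post, but the fields are assumptions.)

```python
# Hypothetical sketch only - field names are assumptions, not the real code.
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
from django.contrib.contenttypes.models import ContentType
from django.db import models

class Block(models.Model):
    # A GenericForeignKey is stored as a real FK into django_content_type
    # plus the id of the related row.
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    content_object = GenericForeignKey("content_type", "object_id")

class Page(models.Model):
    title = models.CharField(max_length=200)
    blocks = GenericRelation(Block)
```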
The 'rails' way, using a block_type name field that can be read
directly in the mk_ai schema.
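A sketch of the Rails-style replacement; the type names and the Page/Section models are assumptions for illustration:

```python
# Hypothetical sketch - the stored type names and models are assumptions.
from django.db import models

class Block(models.Model):
    block_type = models.CharField(max_length=100)   # e.g. "page" or "section"
    block_id = models.PositiveIntegerField()

    @property
    def content_object(self):
        # The "case statement": dispatch on the stored type name.
        if self.block_type == "page":
            return Page.objects.get(pk=self.block_id)
        if self.block_type == "section":
            return Section.objects.get(pk=self.block_id)
        raise ValueError("unknown block_type: %r" % self.block_type)
```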
GenericForeignKey and GenericRelation are two sides of the same coin - they
allow you to easily make queries in both directions. In our domain, I
don't really have much occasion to go from Block to Page, so I don't
really need the GenericRelation. However, if you need to replace it,
you can create a method to do the appropriate query.
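For completeness, a sketch of what such a method might look like on the Page side (names assumed, matching the earlier example):

```python
# Hypothetical sketch - replaces the reverse GenericRelation lookup.
from django.db import models

class Page(models.Model):
    title = models.CharField(max_length=200)

    @property
    def blocks(self):
        # Query equivalent to what GenericRelation would have provided.
        return Block.objects.filter(block_type="page", block_id=self.pk)
```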
We are using React at work. The official documentation
is really good - especially the Thinking in React
section. But I could still use some additional examples, ideas,
etc. so when I saw an offer for several books on React,
including one from Arkency that I had been considering buying for a
while, I broke down and bought the bundle. The first one I read is
“React Under the Hood” by Freddy Rangel.
Overall it is a really good book. The author points out that some of
the ideas in React come from the game development world. To emphasize
that, the example code for the book is a Star Trek ‘game’. The author
provides a git repository
you can clone to get started. The project is set up to use babel and
webpack and a node dev server - all of which just work out of the
box. I need to dig into one of the other books from the Indie Bundle,
Survive JS, to learn more about how to set
these up. You build almost all of the code - except for the rather
hairy navigation and animation parts which are available in the clone
you are encouraged to use to get started.
The example stresses good engineering practices - especially having
one or two smart components that control all state mutation and lots
of well separated dumb components that just render the appropriate info
for a given state. I really liked the EditableElement component and
will probably steal it for a play project I want to do after
completing this book.
The author did not use ES6 syntax because it might be unfamiliar to
some people. I actually find the new syntax easier, so I translated
most things to use ‘let’ instead of var and all seemed to go just
fine. The other change I made throughout was to the module.exports for
each .jsx file. The book suggests starting each class like this:
If you do this, the React plugin for the Chrome Developer Tools just
labels each component as ..., which means you have
to dig around for the section of rendered code you want to
inspect. The project I am on at work uses a slightly different syntax,
but one that is a lot easier to read and understand:
If you do this, then the React debugging tab now shows this component
as <Something attr1=xxx…></Something> which makes it a LOT easier to
find the code you want to inspect.
The example was good, but the best part was the great material in the
final chapter. It discusses:

- PropTypes (which I had heard of but forgotten).
- getInitialState and getDefaultProps (I haven’t used them, but they
  might come in handy).
- How to profile your code with Perf - and then some suggestions about
  what to do about what you find.
- Good information about how to improve performance of components that
  are basically render-only (per the design espoused in the rest of the
  book) using a React add-on called PureRenderMixin. I am going to have
  to look into mixins.
I am currently working on a project that has a main public web site
(mk_web_core) and then a separate AI (mk_ai) application that needs
access to a large percentage of the information in the public site’s
database. Since the AI only makes sense in the context of the public
web site, one option might be to make them a single
application. However, we are planning to experiment with different
versions of the AI, so it seems sensible to separate them and develop
an API contract between the two halves.
My first thought was to completely separate the two - separate code
bases and separate databases. However, the main application has a
deeply nested hierarchical schema and the AI needs to know all the
gory details of that structure. So if we completely separate the two
apps, we need to build something to keep the two views of that
hierarchy in sync. We will eventually need to do that - and then build
an extract, transform, and load (ETL) process for keeping the AI in
sync with the main site. But for now, we are going to put that off and
instead allow the AI read-only access to the information it needs from
the main site.
Django has built in support for multiple database connections so
getting things set up so my AI site could read from the mk_web_core
database was pretty straightforward. The
documentation on multiple databases
indicated that one should create a database router for each database
and then in my settings.py file give DATABASE_ROUTERS a list
containing the two routers. After setting up the database
configuration, I copied the model files from the mk_web_core project
into corresponding app locations in the mk_ai project. I did not want
the mk_ai project to make any changes to the mk_web_core schema, so I
added managed = False to the Meta class for each model class.
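Concretely, the pieces described above look roughly like this. This is a configuration sketch only - the database aliases, router path, and table name are assumptions, not the project’s real settings:

```python
# settings.py - sketch; aliases and the router path are assumptions
DATABASES = {
    "default": {},       # mk_ai's own database settings go here
    "mk_web_core": {},   # connection to the main site's database
}
DATABASE_ROUTERS = ["mk_ai.routers.MkWebCoreRouter"]

# models.py in mk_ai - a mirrored copy of an mk_web_core model
from django.db import models

class Page(models.Model):
    title = models.CharField(max_length=200)

    class Meta:
        managed = False           # mk_ai never creates or alters this table
        db_table = "core_page"    # assumed name of mk_web_core's table
```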
Tests That Depend On The “Read-Only” Database
The original two database router configuration seemed to work but then
I decided I really had to write some unit tests for the mk_ai
application. The mk_web_core application already has unit tests. And
since it is fairly independent - it only interacts with the AI system
through a single “next recommendation” API call - it is easy to mock
out the way it depends on mk_ai without compromising my confidence in
the tests. However, the behavior of the AI application depends strongly on
the data from the mk_web_core application. So to create any meaningful
tests, we really need to be able to create specific data in a test
version of the mk_web_core database. So all of the configuration I did
to prevent the AI application from writing to the mk_web_core schema
made it impossible to set up my test database. Hmmm.
So I removed the managed = False from each model’s Meta class and
tried to figure out how to set up my routers so that I can write to
the mk_web_core test database, but not the mk_web_core
production database. I struggled for a while and then I found this
blog post from NewCircle.
After some trial and error, this router appears to do what I need:
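(The router that the post originally showed isn’t reproduced here. Below is a sketch of a router in that spirit; the app labels, the database alias, and the sys.argv check are assumptions, not the project’s actual code.)

```python
# Sketch only - app labels, alias, and the test check are assumptions.
import sys

class MkWebCoreRouter:
    """Route reads for mk_web_core's apps to its database; only allow
    writes and migrations there while the test runner is active."""

    app_labels = {"materials"}  # assumed set of mk_web_core app labels

    def _under_test(self):
        # Crude but common check: `manage.py test` puts "test" in argv.
        return "test" in sys.argv

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.app_labels:
            return "mk_web_core"
        return None  # no opinion; the next router decides

    def db_for_write(self, model, **hints):
        if model._meta.app_label in self.app_labels:
            # Writable only under test; otherwise defer (and effectively
            # fail, since these tables exist nowhere else).
            return "mk_web_core" if self._under_test() else None
        return None

    def allow_relation(self, obj1, obj2, **hints):
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in self.app_labels:
            # Create these tables only in the test database.
            return db == "mk_web_core" and self._under_test()
        return None
```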
It is somewhat confusing that even though the tables for the
materials app are not created, the migrations from
materials are still listed when you run python manage.py
showmigrations and are recorded in the django_migrations table.
Django has support for test fixtures
in its TestCase. But the fixtures are loaded and removed for each and
every test. That is excessive for my needs and would make our tests
very slow. I finally figured out how to load data into mk_web_core
once, at the beginning of our tests, using migrations:
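(The actual migration isn’t reproduced here; this is a sketch of the general shape - a data migration whose forward operation calls loaddata. The fixture name, app label, and dependency are assumptions.)

```python
# Sketch of a data migration that loads a fixture once, when the test
# database is built. Fixture and app names are assumptions.
from django.core.management import call_command
from django.db import migrations

def load_fixture(apps, schema_editor):
    # Load into whichever database this migration is being applied to.
    call_command("loaddata", "test_hierarchy.json",
                 database=schema_editor.connection.alias)

def unload_fixture(apps, schema_editor):
    pass  # the test database is destroyed afterwards; nothing to undo

class Migration(migrations.Migration):

    dependencies = [
        ("materials", "0001_initial"),  # assumed predecessor
    ]

    operations = [
        migrations.RunPython(load_fixture, unload_fixture),
    ]
```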