While setting up refinerycms recently, I kept running into the following error:
speakingurl-rails.rb:8:in `block in <class:Railtie>': undefined method `prepend_path' for nil:NilClass (NoMethodError)
This happened both when creating a new application and integrating refinerycms into an existing application. It occurred when running rails generate refinery:cms and other rake tasks like rake db:migrate. After some searching, I came across this explanation.
What happened was that the latest version of sprockets-rails had been published to RubyGems. speakingurl-rails, which is a dependency for slug support, had not yet been updated to support the new sprockets-rails gem. To resolve this, I had to:
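A common workaround for this kind of dependency breakage (offered here as an assumption, not the exact steps taken) is to pin sprockets-rails back to the 2.x series until speakingurl-rails catches up:

```shell
# Assumption: pinning sprockets-rails below 3.0 restores compatibility.
# In the Gemfile, add or change the entry to:
#   gem 'sprockets-rails', '~> 2.3'
# Then re-resolve the dependency within the new constraint:
bundle update sprockets-rails
```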
I attempted to deploy my Rails app to AWS OpsWorks and got the following error:
[2016-01-01T08:39:36+00:00] ERROR: Running exception handlers
[2016-01-01T08:39:36+00:00] ERROR: Exception handlers complete
[2016-01-01T08:39:36+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
[2016-01-01T08:39:36+00:00] ERROR: deploy[/srv/www/myapp] (deploy::rails line 65) had an error: Chef::Exceptions::Exec: if [ -f Gemfile ]; then echo 'OpsWorks: Gemfile found - running migration with bundle exec' && /usr/local/bin/bundle exec /usr/local/bin/rake db:migrate; else echo 'OpsWorks: no Gemfile - running plain migrations' && /usr/local/bin/rake db:migrate; fi returned 1, expected 0
[2016-01-01T08:39:36+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
OpsWorks doesn’t give the best logging from the AWS Console. All we know from this is that the deployment failed on the following line:
if [ -f Gemfile ]; then echo 'OpsWorks: Gemfile found - running migration with bundle exec' && /usr/local/bin/bundle exec /usr/local/bin/rake db:migrate; else echo 'OpsWorks: no Gemfile - running plain migrations' && /usr/local/bin/rake db:migrate; fi
This means that it failed during a migration, but several possible issues could have caused that call to fail. As John C. Bland suggests, the best option is to SSH into the EC2 instance and get the stacktrace.
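Concretely (the instance address and SSH user are placeholders), that means connecting to the instance and reading the stacktrace at the path the FATAL line above points to:

```shell
# Placeholders: adjust the user and address for your stack's instances.
ssh ec2-user@<instance-ip>

# On the instance, read the Chef stacktrace dumped at the path from the log:
sudo less /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
```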
I ran into an issue this week where our NW.js desktop application kept hitting the same error, even after a fresh reinstall. The user had installed this application on their local machine before. The answer was that we also needed to delete the local app data, stored in:
For Mac OS X, ~/Library/Application\ Support/SOME_NW_APP
For Windows 8, C:\Users\%USERNAME%\AppData\Local\Chromium\User Data\Default
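Clearing that directory is a one-liner. The sketch below runs against a temporary stand-in directory so it is safe to execute verbatim; the real paths are the ones listed above, with SOME_NW_APP being the placeholder app name:

```shell
# Stand-in for ~/Library/Application Support, so this demo is non-destructive.
APP_SUPPORT=$(mktemp -d)
mkdir -p "$APP_SUPPORT/SOME_NW_APP"   # simulate the leftover app data

# The actual fix, e.g. on Mac OS X, would be:
#   rm -rf ~/Library/Application\ Support/SOME_NW_APP
rm -rf "$APP_SUPPORT/SOME_NW_APP"
```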
AWS Elastic Beanstalk simplifies application management, with server scaling out-of-the-box. Scaling is pretty idiot-proof. In fact, if you try to scale down to zero instances, you get the following:
MaxBatchSize: Invalid option value: '0' (Namespace: 'aws:autoscaling:updatepolicy:rollingupdate', OptionName: 'MaxBatchSize'): Value is less than minimum allowed value: 1
A few weeks ago, I was asked to scale an environment down to determine if there was any adverse effect in terminating it. Basically, we would scale down to zero and see if anyone missed it being alive. To do this, we had to go around Elastic Beanstalk and use the Auto Scaling Group directly:
In the AWS Console, go to Compute -> EC2
In EC2 Dashboard, go to AUTO SCALING -> Auto Scaling Groups
Use the Filter text box to find the auto scaling group for your environment. Typing in the environment name (e.g. stag-rails-app-s1) should work.
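The same scale-down can be done from the command line with the AWS CLI, setting the group's minimum, maximum, and desired capacity to zero (the group name below is illustrative; use the one found via the Filter step above):

```shell
# Scale the environment's Auto Scaling Group down to zero instances.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name stag-rails-app-s1-asg \
  --min-size 0 \
  --max-size 0 \
  --desired-capacity 0
```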
Our team had been seeing instability in many of our Docker environments on Elastic Beanstalk. This usually meant we had to rebuild our environments to get things working again. While researching possible causes, we came across a post about the PID 1 zombie reaping problem. I won't go into detail on why this is a problem, as the post covers it pretty thoroughly. Here was our problem: on deploys, zombie processes get left behind when we kill a container's process to start a new one.
To resolve this issue, we must understand the difference between evaluating a command and exec-ing it. When the shell evaluates a command normally, it forks a child process to run it; with exec, the command replaces the shell and stays in the same process, which is what we want.
The CMD instruction in the Dockerfile accepts both forms:
CMD command param1 param2
CMD ["executable","param1","param2"]
The first is the shell form, which runs the command through /bin/sh -c as a child process; the second is the exec form, which runs the executable directly as PID 1.
Reference to the CMD instruction can be found here.
An important note is that this is not a complete solution to our problem. If the command is a start script, we also need to exec the final command in that script. Otherwise, the script's shell stays as PID 1 and the real process remains a child.
For example, if the start script ended with:
unicorn -c docker/config/unicorn.rb
it would need to be changed to:
exec unicorn -c docker/config/unicorn.rb
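The PID behavior can be verified with a small shell experiment (the script path below is a throwaway name for this demo): the exec'd command reports the same PID as the script's shell, confirming it replaced the process rather than forking a child.

```shell
# Write a tiny start script whose last step uses `exec`.
cat > /tmp/start-demo.sh <<'EOF'
echo "script PID: $$"
exec sh -c 'echo "exec PID: $$"'
EOF

# Run it and extract both reported PIDs.
out=$(sh /tmp/start-demo.sh)
script_pid=$(printf '%s\n' "$out" | sed -n 's/^script PID: //p')
exec_pid=$(printf '%s\n' "$out" | sed -n 's/^exec PID: //p')
```

Because exec replaces the running shell, `script_pid` and `exec_pid` come out equal; without the exec, the final command would report a different (child) PID.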
Thanks to Tung, Benson and Eddie for explaining this to me.
Docker has become very popular in the last few years. At work, we've been using it for over a year now. As we develop, the images for our apps have grown very large. This is becoming an issue because building, pulling, and pushing these images takes longer and longer. Furthermore, our deployment process has been intermittently failing when doing docker push to Docker Hub; this happens particularly often when upstream internet is slow. Our team tried a handful of different techniques to shrink our image sizes.
This week, I'll go over the different techniques to shrink Docker images. We'll look at the advantages and disadvantages. Depending on your engineering organization and architecture, some will work better than others.
Our team looked at a number of factors before choosing Debian as our base. As Ruby is our primary development language, we wanted to base our images on a distro with strong community support. Debian also provides tooling that our DevOps team is familiar with.
There have been a few shops that are using alpine in production: https://blog.codeship.com/build-minimal-docker-container-ruby-apps/
While alpine isn't something we'll be using, it may be worth revisiting in the future as adoption picks up.
UPDATE: (03/24/2016) Docker just announced that they build their new engine on Alpine Linux. There looks to be some momentum building for Alpine.
As I stated earlier, we are primarily a Ruby on Rails organization. As such, there are plenty of Ruby and Rails community-supported Docker images available to use as base images.
We took a look at some of the Ruby and Rails images we use. All of these images are Debian-based. Even among minor versions, we found large differences in size. In general (though not always), the latest minor version is the smallest. Since minor versions are backwards compatible, we should be on the latest minor version. It is also a good idea to standardize the Ruby and Rails versions we use across our services; unless there's a specific reason to deviate, we should stick to one version for consistency.
Below are some of the versions I was able to download, along with their sizes:
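To reproduce this comparison locally (assuming the images have already been pulled), the sizes can be listed with:

```shell
# List local ruby images with their sizes; --format trims the output to
# just the repository, tag, and size columns.
docker images ruby --format '{{.Repository}}:{{.Tag}}  {{.Size}}'
```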