update with chef-cookbook/docker v4.9.2

Jeffrey Martin 2019-02-17 00:02:05 -06:00
parent 1635028a8e
commit c238a6a46d
No known key found for this signature in database
GPG Key ID: 0CD9BBC2AF15F171
117 changed files with 9305 additions and 3350 deletions


@@ -0,0 +1,7 @@
source 'https://supermarket.chef.io'
metadata
group :integration do
cookbook 'docker_test', path: 'test/cookbooks/docker_test'
end


@@ -2,111 +2,470 @@
This file is used to list changes made in each version of the docker cookbook.
## 4.9.2 (2019-02-15)
- Support setting shared memory size.
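A minimal sketch of how the new setting might be used in a recipe; the `shm_size` property name and the accepted size strings are assumptions based on this entry and the cookbook's other memory properties, not confirmed usage:

```ruby
# Hypothetical example: run a container with a 128 MB /dev/shm.
# `shm_size` is assumed to take the same "<number><unit>" strings as the
# cookbook's other memory-size properties.
docker_container 'busybox_shm' do
  repo 'busybox'
  command 'sleep 3600'
  shm_size '128m'
  action :run
end
```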
## 4.9.1 (2019-02-01)
- added systemd_socket_opts for additional configuration of the systemd socket file
## 4.9.0 (2018-12-17)
- Add support for windows - [@smcavallo](https://github.com/smcavallo)
- Expand ChefSpec testing - [@smcavallo](https://github.com/smcavallo)
- Fix for when HealthCheck is used - [@smcavallo](https://github.com/smcavallo)
## 4.8.0 (2018-12-09)
- Fix issues with network_mode in docker_container - [@smcavallo](https://github.com/smcavallo)
- Add support for container health_check options - [@smcavallo](https://github.com/smcavallo)
- Add new docker_image_prune resource - [@smcavallo](https://github.com/smcavallo)
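A hedged sketch of the new resource; the `dangling` and `prune_until` properties shown here are assumptions drawn from Docker's `docker image prune` filters rather than the resource's documented interface:

```ruby
# Hypothetical example: prune dangling images older than 24 hours.
docker_image_prune 'prune-old-images' do
  dangling true        # assumed: only remove untagged (dangling) images
  prune_until '24h'    # assumed: maps to the `until` prune filter
  action :prune
end
```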
## 4.7.0 (2018-12-05)
- Added 17.03 support on RHEL 7. Thanks @smcavallo
- Added 18.09 support. Thanks @smcavallo
## 4.6.8 (2018-11-27)
- add missing new_resource reference that prevented docker_container's reload action from running
## 4.6.7 (2018-10-10)
- Add :default_address_pool property to docker_service
- Import docker.com repository gpg key via HTTPS directly from docker to avoid timeouts with Ubuntu's key registry
## 4.6.6 (unreleased)
- :default_ip_address_pool property added to configure default address pool for networks created by Docker.
## 4.6.5 (2018-09-04)
- Package names changed again; it looks like the xenial and bionic naming schemes were swapped.
## 4.6.4 (2018-08-29)
- xenial 18.03 contains the new test version format
## 4.6.3 (2018-08-23)
- refactor version_string
## 4.6.2 (2018-08-23)
- Use different version string on .deb packages
## 4.6.1 (2018-08-21)
- Include setup_docker_repo in docker_service and allow old docker-ce versions for centos
## 4.6.0 (2018-08-19)
- Bump docker version to 18.06.0
## 4.5.0 (2018-08-16)
- sets the default log_level for the systemd docker service back to nil
- change require relative to library path
- docker_execute -> docker_exec
- Loosen up the requirement on docker-api gem
- Add new docker_plugin resource
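A rough sketch of the new resource; the `remote` and `grant_privileges` property names are assumptions modeled on `docker plugin install`, so verify them against the resource documentation:

```ruby
# Hypothetical example: install and enable the vieux/sshfs volume plugin.
docker_plugin 'sshfs' do
  remote 'vieux/sshfs'     # assumed: plugin reference on Docker Hub
  grant_privileges true    # assumed: accept the privileges the plugin requests
  action [:install, :enable]
end
```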
## 4.4.1 (2018-07-23)
- Adding tests for docker_container detach == false (container is attached)
- Add new_resource and current_resource objects as context for methods when telling a container to wait (when detach is false)
## 4.4.0 (2018-07-17)
- docker service :log_level property converted to String.
- Use new package versioning scheme for Ubuntu bionic
- Bump the docker version everywhere
## 4.3.0 (2018-06-19)
- Remove the zesty? helper
- Initial support for Debian Buster (10)
- Bump the package default to 18.03.0
- Remove old integration tests
- Update package specs to pass on Amazon Linux
## 4.2.0 (2018-04-09)
- Initial support for Chef 14
- Remove unused api_version helper
- Support additional sysv RHEL like platforms by using platform_family
- Added oom_kill_disable and oom_score_adj support to docker_container
- ENV returns nil if the variable isn't found
- Remove the TLS default helpers
- Move coerce_labels into docker_container where it's used
- Add desired_state false to a few more properties
- If the ENV values are nil don't use them to build busted defaults for TLS
- Remove a giant pile of Chef 12-isms
- Kill off ArrayType and NonEmptyArray types
- Don't require docker all over the place
- Kill the ShellCommand type
- Fix undefined method `v' for DockerContainer
- Make to_shellwords idempotent in DockerContainer
- Fix(Chef14): Use property_is_set with new_resource
- Use try-restart for systemd & retry start one time
## 4.1.1 (2018-03-11)
- Move to_snake_case to the container resource where it's used
- Reduce the number of coerce helpers in the container resource
- Remove the Boolean type and instead just use TrueClass,FalseClass
- Use an actual integer in the memory_swappiness test since after reworking the coerce helpers we're requiring what we always stated we required here
## 4.1.0 (2018-03-10)
- Remove required from the name property. This resolves Foodcritic warnings in Foodcritic 13
- Resolve a pile of Chef 14 deprecation warnings in the container and images resources
- Remove support for Ubuntu 17.04 from the installation_package resource
- Moved all the helper libraries into the resources themselves. This is part 1 of the work to get these resources ready for inclusion in Chef 14
- Removed the version logic from installation_package when on Amazon Linux. Since we don't set up the repo, only a single version is available to us, so we should just install that version. This resolves the constant need to update the hardcoded version in the cookbook every time Amazon releases a new Docker version.
## 4.0.2 (2018-03-05)
- Flag registry password property as sensitive in docker_registry resource
## 4.0.1 (2018-02-07)
- allow labels to have colons in the value
## 4.0.0 (2018-01-15)
### Breaking Changes
- Default to Docker 17.12.0
- Remove previously deprecated support for Debian 7 / CentOS 6. Currently supported releases of Docker do not run on these platforms.
- Removed support for the EOL Docker 1.12.3
- Removed the ChefSpec matchers which are no longer needed with ChefDK 2.X
- Remove the broken legacy binary installation resource. This was only used by very old EOL docker releases
- By default, set up the apt/yum repos in the package install resource so that out of the box there's no need for additional cookbooks. If you would like to manage your own Docker repos or other internal repos, this can be disabled via a property. Due to this change the cookbook now requires Chef 12.15+
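For example, a sketch of opting out of the cookbook-managed repository, assuming the switch is the `setup_docker_repo` property mentioned in the 4.6.1 entry above:

```ruby
# Hypothetical example: manage your own Docker repository and tell the
# cookbook not to set one up.
docker_service 'default' do
  install_method 'package'
  setup_docker_repo false   # assumed property name, per the 4.6.1 note
  version '17.12.0'
  action [:create, :start]
end
```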
### Other Changes
- Greatly expand Travis CI testing of the cookbook and use new InSpec resources for Docker instead of shelling out
- Add support for Ubuntu 17.10
- Update Fedora support for new DNF support in Chef
- Minor correctness and formatting updates to the readme
- load internal and ipv6 status for existing docker_network resources
- Update Amazon Linux to default to 17.09.1, which is the current version in their repos
- Fix the remove action in docker_installation_script
- Replace deprecated graph with data_root. Graph will now silently map to data_root
- Pass --host instead of -H in docker_service for clarity
- Make sure tar is installed to decompress the tarball in the docker_installation_tarball resource
- Update the download path for Docker CE to unbreak docker_installation_tarball
- Allow specifying channels in the docker_installation_tarball resource so you can install non-stable releases
## 3.0.0 (2017-12-22)
- Install docker-api via gem metadata. This bumps the required chef release for this cookbook to 12.10+
- Removed support for Ubuntu Precise
- Reworked the init system detection logic to work on additional platforms and without hardcoded distro version numbers
- Removed shasums from the binary installation resource for Docker 1.6-1.9.1 which are long ago EOL Docker releases
- Test on newer releases of openSUSE and Fedora and test on the latest Docker release
## 2.17.0 (2017-11-10)
- Update Amazon Linux to default to 17.06.2
## 2.16.4 (2017-10-30)
- quote log_opt
## 2.16.3 (2017-10-26)
- add init support to docker_container
## 2.16.2 (2017-10-05)
- fix for ip_address not being set
## 2.16.1 (2017-10-05)
- added support for env_file property
- bumping to 17.09.0
## 2.16.0 (2017-09-18)
- Use docker-api 1.33.6 which includes a few fixes
- This cookbook actually requires Chef 12.7+ so make sure that's mentioned everywhere
- Simplify debian/ubuntu detection code
- Remove support for long ago EOL Ubuntu distros like 15.04/15.10
- Update Amazon Linux to default to 17.03.2
## 2.15.29 (2017-09-12)
- Resolve Chef 14 deprecation warnings in docker_network
- Resolve new_resource warnings in docker_service
- Remove yum from the Berksfile
## 2.15.28 (2017-09-07)
- bumping to 17.06.2
- GH-910 image push needs to pass the credentials and a specific tag
## 2.15.27 (2017-08-31)
- restart docker on rhel sysvinit changes
## 2.15.26 (2017-08-25)
- bumping to 17.06.1
- support for debian 9
## 2.15.25 (2017-08-24)
- notifying :stop and :start instead of :restart in upstart service manager
## 2.15.24 (2017-08-20)
- Supporting env_vars and using in systemd
## 2.15.23 (2017-08-20)
- Fixing bug in volumes introduced with namespacing fixes
## 2.15.22 (2017-08-20)
- Fixing up deprecation warnings
## 2.15.21 (2017-08-07)
- fix to_bytes parsing
- host port can now be a range and matches properly with container port range
- typo on security_opt
- fix for docker_service not containing a listening socket
## 2.15.20 (2017-08-04)
- Using stable docker package version numbers
## 2.15.19 (2017-08-04)
- reverting default_group
- adding docker group to README
## 2.15.18 (2017-07-20)
- create the socket first so restarts on the service unit file don't fail
- redhat defaults to a different group name
- socket group shouldn't be hardcoded
- docker_network: support ipv6 & internal
## 2.15.17 (2017-07-18)
- adding restart notifications to upstart and cleaning house on the configs
- fix docker socket group being empty
- bring systemd unit file closer to stock
## 2.15.16 (2017-07-14)
- Issue #849 Fix service restarts on OS using systemd
## 2.15.15 (2017-07-10)
- upstream systemd config no longer contains the slave mount flag
## 2.15.14 (2017-07-03)
- Simplifying kitchen config
- Using dokken-images to speed up tests
- Updating Amazon Linux to default to 17.03.1
- Package helper for Debian 9
## 2.15.13 (2017-06-15)
- kill_after property default value to nil
- only use --raw-logs argument in versions which support it
## 2.15.12 (2017-06-13)
- reverting gem metadata for now as it requires build tools dependency for the json gem
## 2.15.11 (2017-06-13)
- make docker.service override match closer to stock
## 2.15.10 (2017-06-13)
- adding support for chef >= 12.8 metadata gem installs
- using docker-api 1.33.4
## 2.15.9 (2017-06-13)
- updating systemd docker.service with changes from official docker install
- 12.04 doesn't support docker 17.05.0
## 2.15.8 (2017-06-12)
- Bumping to latest docker version
## 2.15.7 (2017-06-12)
- Adding Ubuntu Zesty 17.04 support
## 2.15.6 (2017-05-01)
- # 853 - Add network_aliases support
- # 854 - Expose package_name through the docker_service resource
## 2.15.5 (2017-04-19)
- Fixing up memory related API keys
- Adding KernelMemory
- Adding MemorySwappiness
- Adding MemoryReservation
- Fixing MemorySwap convergatude (bug #833)
- Allowing for both integer and string input for all memory values
## 2.15.4 (2017-04-19)
- Fixing security_opt property
## 2.15.3 (2017-04-18)
- Updating for 17.04.0
## 2.15.2 (2017-02-15)
- Reverting 12.15.1 changes
## 2.15.1 (2017-02-15)
- 799 - Adding service restarts to systemd template resources
## 2.15.0 (2017-02-15)
- Removing dependency on compat_resource.
- Now requires Chef 12.5 or higher.
## 2.14.3 (2017-02-14)
- Defaulting package installation version to docker 1.13.1
## 2.14.3 (2017-02-06)
- Reverting gem vendor due to c extensions in json dep.
- Using docker-api-1.33.2 in _autoload
## 2.14.2 (2017-01-31)
- Vendoring docker-api-1.33.2
## 2.14.1 (2017-01-31)
- defaulting to package installation on Amazon Linux
## 2.14.0 (2017-01-31)
- various updates for Docker 1.13.0
- defaulting to 1.13.0 for docker_installation
- package name fixes for new debian/ubuntu schemes
- defaulting restart_policy to nil in docker_resource
## 2.13.11 (2017-01-25)
- # 798 - Temporary "fix" for delayed service restart: using :immediate notification in docker_service resource
## 2.13.10 (2017-01-13)
- # 800 - fixing ubuntu startup script
- # 802 - using chef_version metadata property only in 12.6.0 and higher
## 2.13.9 (2016-12-29)
- 793 - Removing service restarts due to chef-client behavior changes.
## 2.13.8 (2016-12-28)
- # 794 - network mode bridge
- removing emacs package in upstart provider
- Adding dokken / travis test matrix
## 2.13.7 (2016-12-24)
- adding additional logging drivers
- adding action :reload
## 2.13.6 (2016-12-22)
- adding ip_address support for docker_containers
- adding volume_driver support
## 2.13.5 (2016-12-21)
- Temporary workaround for the broken upstart provider in chef-client
- Fixing package name for ubuntu version later than 1.12.3
## 2.13.4 (2016-12-20)
- Fixing comparison operator docker daemon args for versions < 1.12
## 2.13.3 (2016-12-20)
- 792 - Reverting 791 fix
## 2.13.2 (2016-12-20)
- 791 - Fix logic bug in docker_service daemon args calculation
## 2.13.1 (2016-12-19)
- # 786 - Adding options hash to docker_volume connection
- # 787 - Adding wait loop to docker_service_manager_execute :stop
## 2.13.0 (2016-11-25)
- Adding sysctl property to docker_container resource
## 2.12.0 (2016-11-25)
- Updating compat_resource dep to 12.16.2
- Updating docker-api gem dep 1.32.1
## 2.11.1 (2016-11-24)
- Fix for #701 - Revert commit that caused restart loops in systemd provider
## 2.11.0 (2016-11-23)
- make systemd MountFlags configurable
- make running wait time configurable
## 2.10.0 (2016-11-23)
- Implement network connect/disconnect
- Fixed dns options mutual exclusion
- Misc test harness cleanup
## 2.9.10 (2016-11-14)
- renaming systemd_conf to systemd_args due to a conflict with systemd cookbook
## 2.9.9 (2016-11-14)
- Fixing resource idempotence in labels property
- Fix regression introduced by #741, breaking Debian installation
- Added ro_rootfs => ReadonlyRootfs special cases mapping
- Enable systemd options as a docker_service attribute
## 2.9.8 (2016-11-08)
- Fixed a typo in an error message
- Enable tarball install through docker_service
- option log_opt is defined as --log-opt value1 --log-opt value2 instead of --log-opt=value1 --log-opt=value2
- Depend on a working compat_resource cookbook
## 2.9.7 (2016-10-14)
- Require the most recent compat_resource
- Get foodcritic passing
- Update the Rakefile and use cookstyle
@@ -114,34 +473,42 @@ This file is used to list changes made in each version of the docker cookbook.
- Add matchers for docker_installation_tarball
## v2.9.6
- Fix README: entrypoint, not entry_point
- dockerd binary on 1.12+ for upstart
- fix docker.socket for systemd
## v2.9.5
- bumping docker-api gem
## v2.9.4
- Switch to the dockerd binary on 1.12+
- Add links to resources overview list
## v2.9.3
- add uts_mode support for docker_container provider (#730)
## v2.9.2
- adding feature ReadonlyRootfs
- bumping docker version to 1.11.2
- removing etcd; it fails tests on xenial, and swarm will have it built in as of 1.12
## v2.9.1
- implement userns_mode for containers
## v2.9.0
- Feature - docker_installation_tarball resource
- Patch - Adding missing http_proxy support to rhel/sysvinit
- Patch #705 - Avoid installing docker-api gem in ChefSpec
## v2.8.0
- Feature - User namespace configuration capability for docker_service
## v2.7.1
@@ -845,7 +1212,7 @@ switching systemd unit MountFlags from slave to private
## v1.0.17
- Fixing up regressions in older Docker API versions introduced in cookbook release 1.0.15
- Adding @api_version instance variable
- Adding serialized_log_config
- Adding parsed_network_mode
@@ -1518,54 +1885,6 @@ Lots of community contributions this release -- thanks!
- Initial release
[#101]: https://github.com/bflad/chef-docker/issues/101
[#103]: https://github.com/bflad/chef-docker/issues/103
[#104]: https://github.com/bflad/chef-docker/issues/104
@@ -1634,6 +1953,7 @@ Lots of community contributions this release -- thanks!
[#208]: https://github.com/bflad/chef-docker/issues/208
[#217]: https://github.com/bflad/chef-docker/issues/217
[#219]: https://github.com/bflad/chef-docker/issues/219
[#22]: https://github.com/bflad/chef-docker/issues/22
[#220]: https://github.com/bflad/chef-docker/issues/220
[#221]: https://github.com/bflad/chef-docker/issues/221
[#223]: https://github.com/bflad/chef-docker/issues/223
@@ -1644,14 +1964,17 @@ Lots of community contributions this release -- thanks!
[#237]: https://github.com/bflad/chef-docker/issues/237
[#238]: https://github.com/bflad/chef-docker/issues/238
[#239]: https://github.com/bflad/chef-docker/issues/239
[#24]: https://github.com/bflad/chef-docker/issues/24
[#240]: https://github.com/bflad/chef-docker/issues/240
[#242]: https://github.com/bflad/chef-docker/issues/242
[#244]: https://github.com/bflad/chef-docker/issues/244
[#245]: https://github.com/bflad/chef-docker/issues/245
[#246]: https://github.com/bflad/chef-docker/issues/246
[#25]: https://github.com/bflad/chef-docker/issues/25
[#250]: https://github.com/bflad/chef-docker/issues/250
[#258]: https://github.com/bflad/chef-docker/issues/258
[#259]: https://github.com/bflad/chef-docker/issues/259
[#26]: https://github.com/bflad/chef-docker/issues/26
[#260]: https://github.com/bflad/chef-docker/issues/260
[#263]: https://github.com/bflad/chef-docker/issues/263
[#264]: https://github.com/bflad/chef-docker/issues/264
@@ -1660,8 +1983,10 @@ Lots of community contributions this release -- thanks!
[#267]: https://github.com/bflad/chef-docker/issues/267
[#268]: https://github.com/bflad/chef-docker/issues/268
[#269]: https://github.com/bflad/chef-docker/issues/269
[#27]: https://github.com/bflad/chef-docker/issues/27
[#276]: https://github.com/bflad/chef-docker/issues/276
[#279]: https://github.com/bflad/chef-docker/issues/279
[#28]: https://github.com/bflad/chef-docker/issues/28
[#280]: https://github.com/bflad/chef-docker/issues/280
[#281]: https://github.com/bflad/chef-docker/issues/281
[#284]: https://github.com/bflad/chef-docker/issues/284
@@ -1672,5 +1997,47 @@ Lots of community contributions this release -- thanks!
[#296]: https://github.com/bflad/chef-docker/issues/296
[#297]: https://github.com/bflad/chef-docker/issues/297
[#298]: https://github.com/bflad/chef-docker/issues/298
[#30]: https://github.com/bflad/chef-docker/issues/30
[#31]: https://github.com/bflad/chef-docker/issues/31
[#35]: https://github.com/bflad/chef-docker/issues/35
[#37]: https://github.com/bflad/chef-docker/issues/37
[#38]: https://github.com/bflad/chef-docker/issues/38
[#39]: https://github.com/bflad/chef-docker/issues/39
[#42]: https://github.com/bflad/chef-docker/issues/42
[#43]: https://github.com/bflad/chef-docker/issues/43
[#44]: https://github.com/bflad/chef-docker/issues/44
[#46]: https://github.com/bflad/chef-docker/issues/46
[#47]: https://github.com/bflad/chef-docker/issues/47
[#48]: https://github.com/bflad/chef-docker/issues/48
[#49]: https://github.com/bflad/chef-docker/issues/49
[#51]: https://github.com/bflad/chef-docker/issues/51
[#52]: https://github.com/bflad/chef-docker/issues/52
[#55]: https://github.com/bflad/chef-docker/issues/55
[#56]: https://github.com/bflad/chef-docker/issues/56
[#57]: https://github.com/bflad/chef-docker/issues/57
[#58]: https://github.com/bflad/chef-docker/issues/58
[#59]: https://github.com/bflad/chef-docker/issues/59
[#60]: https://github.com/bflad/chef-docker/issues/60
[#62]: https://github.com/bflad/chef-docker/issues/62
[#63]: https://github.com/bflad/chef-docker/issues/63
[#64]: https://github.com/bflad/chef-docker/issues/64
[#65]: https://github.com/bflad/chef-docker/issues/65
[#67]: https://github.com/bflad/chef-docker/issues/67
[#68]: https://github.com/bflad/chef-docker/issues/68
[#72]: https://github.com/bflad/chef-docker/issues/72
[#77]: https://github.com/bflad/chef-docker/issues/77
[#78]: https://github.com/bflad/chef-docker/issues/78
[#80]: https://github.com/bflad/chef-docker/issues/80
[#81]: https://github.com/bflad/chef-docker/issues/81
[#82]: https://github.com/bflad/chef-docker/issues/82
[#83]: https://github.com/bflad/chef-docker/issues/83
[#84]: https://github.com/bflad/chef-docker/issues/84
[#85]: https://github.com/bflad/chef-docker/issues/85
[#86]: https://github.com/bflad/chef-docker/issues/86
[#88]: https://github.com/bflad/chef-docker/issues/88
[#89]: https://github.com/bflad/chef-docker/issues/89
[#90]: https://github.com/bflad/chef-docker/issues/90
[#91]: https://github.com/bflad/chef-docker/issues/91
[#98]: https://github.com/bflad/chef-docker/issues/98
[@jcrobak]: https://github.com/jcrobak
[@wingrunr21]: https://github.com/wingrunr21


@@ -0,0 +1,13 @@
# This gemfile provides additional gems for testing and releasing this cookbook
# It is meant to be installed on top of ChefDK which provides the majority
# of the necessary gems for testing this cookbook
#
# Run 'chef exec bundle install' to install these dependencies
source 'https://rubygems.org'
gem 'berkshelf'
gem 'community_cookbook_releaser'
gem 'kitchen-dokken'
gem 'kitchen-inspec'
gem 'test-kitchen'


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,20 +0,0 @@
<!-- This is a generated file. Please do not edit directly -->
# Maintainers
This file lists how this cookbook project is maintained. When making changes to the system, this file tells you who needs to review your patch - you need a review from an existing maintainer for the cookbook to provide a :+1: on your pull request. Additionally, you need to not receive a veto from a Lieutenant or the Project Lead.
Check out [How Cookbooks are Maintained](https://github.com/chef-cookbooks/community_cookbook_documentation/blob/master/CONTRIBUTING.MD) for details on the process and how to become a maintainer or the project lead.
# Project Maintainer
* [Sean OMeara](https://github.com/someara)
# Maintainers
* [Jennifer Davis](https://github.com/sigje)
* [Sean OMeara](https://github.com/someara)
* [Tim Smith](https://github.com/tas50)
* [Thom May](https://github.com/thommay)
* [Anthony Scalisi](https://github.com/scalp42)
* [Chase Bolt](https://github.com/chasebolt)
* [Brian Flad](https://github.com/bflad)
* [Tom Duffield](https://github.com/tduffield)

File diff suppressed because it is too large


@@ -0,0 +1,2 @@
Please refer to
https://github.com/chef-cookbooks/community_cookbook_documentation/blob/master/TESTING.MD


@@ -0,0 +1,106 @@
# Put files/directories that should be ignored in this file when uploading
# to a chef-server or supermarket.
# Lines that start with '# ' are comments.
# OS generated files #
######################
.DS_Store
Icon?
nohup.out
ehthumbs.db
Thumbs.db
# SASS #
########
.sass-cache
# EDITORS #
###########
\#*
.#*
*~
*.sw[a-z]
*.bak
REVISION
TAGS*
tmtags
*_flymake.*
*_flymake
*.tmproj
.project
.settings
mkmf.log
## COMPILED ##
##############
a.out
*.o
*.pyc
*.so
*.com
*.class
*.dll
*.exe
*/rdoc/
# Testing #
###########
.watchr
.rspec
spec/*
spec/fixtures/*
test/*
features/*
examples/*
Guardfile
Procfile
.kitchen*
.rubocop.yml
spec/*
Rakefile
.travis.yml
.foodcritic
.codeclimate.yml
# SCM #
#######
.git
*/.git
.gitignore
.gitmodules
.gitconfig
.gitattributes
.svn
*/.bzr/*
*/.hg/*
*/.svn/*
# Berkshelf #
#############
Berksfile
Berksfile.lock
cookbooks/*
tmp
# Policyfile #
##############
Policyfile.rb
Policyfile.lock.json
# Cookbooks #
#############
CONTRIBUTING*
CHANGELOG*
TESTING*
# Strainer #
############
Colanderfile
Strainerfile
.colander
.strainer
# Vagrant #
###########
.vagrant
Vagrantfile


@@ -1,301 +0,0 @@
PATH
remote: .
specs:
excon (0.54.0)
GEM
remote: http://rubygems.org/
specs:
activesupport (3.2.6)
i18n (~> 0.6)
multi_json (~> 1.0)
backports (3.6.4)
chronic (0.6.7)
delorean (2.0.0)
chronic
diff-lcs (1.2.5)
eventmachine (1.0.4)
eventmachine (1.0.4-java)
ffi2-generators (0.1.1)
formatador (0.2.3)
i18n (0.6.0)
jruby-openssl (0.9.17-java)
json (1.8.2)
json (1.8.2-java)
kgio (2.9.2)
minitest (4.7.5)
multi_json (1.3.6)
open4 (1.3.0)
puma (3.6.0)
puma (3.6.0-java)
rack (1.6.0)
rack-protection (1.2.0)
rack
rack-test (0.6.3)
rack (>= 1.0)
raindrops (0.13.0)
rake (0.9.2.2)
rdoc (3.12)
json (~> 1.4)
rspec (3.5.0)
rspec-core (~> 3.5.0)
rspec-expectations (~> 3.5.0)
rspec-mocks (~> 3.5.0)
rspec-core (3.5.0)
rspec-support (~> 3.5.0)
rspec-expectations (3.5.0)
diff-lcs (>= 1.2.0, < 2.0)
rspec-support (~> 3.5.0)
rspec-mocks (3.5.0)
diff-lcs (>= 1.2.0, < 2.0)
rspec-support (~> 3.5.0)
rspec-support (3.5.0)
rubysl (2.0.14)
rubysl-abbrev (~> 2.0)
rubysl-base64 (~> 2.0)
rubysl-benchmark (~> 2.0)
rubysl-bigdecimal (~> 2.0)
rubysl-cgi (~> 2.0)
rubysl-cgi-session (~> 2.0)
rubysl-cmath (~> 2.0)
rubysl-complex (~> 2.0)
rubysl-continuation (~> 2.0)
rubysl-coverage (~> 2.0)
rubysl-csv (~> 2.0)
rubysl-curses (~> 2.0)
rubysl-date (~> 2.0)
rubysl-delegate (~> 2.0)
rubysl-digest (~> 2.0)
rubysl-drb (~> 2.0)
rubysl-e2mmap (~> 2.0)
rubysl-english (~> 2.0)
rubysl-enumerator (~> 2.0)
rubysl-erb (~> 2.0)
rubysl-etc (~> 2.0)
rubysl-expect (~> 2.0)
rubysl-fcntl (~> 2.0)
rubysl-fiber (~> 2.0)
rubysl-fileutils (~> 2.0)
rubysl-find (~> 2.0)
rubysl-forwardable (~> 2.0)
rubysl-getoptlong (~> 2.0)
rubysl-gserver (~> 2.0)
rubysl-io-console (~> 2.0)
rubysl-io-nonblock (~> 2.0)
rubysl-io-wait (~> 2.0)
rubysl-ipaddr (~> 2.0)
rubysl-irb (~> 2.0)
rubysl-logger (~> 2.0)
rubysl-mathn (~> 2.0)
rubysl-matrix (~> 2.0)
rubysl-mkmf (~> 2.0)
rubysl-monitor (~> 2.0)
rubysl-mutex_m (~> 2.0)
rubysl-net-ftp (~> 2.0)
rubysl-net-http (~> 2.0)
rubysl-net-imap (~> 2.0)
rubysl-net-pop (~> 2.0)
rubysl-net-protocol (~> 2.0)
rubysl-net-smtp (~> 2.0)
rubysl-net-telnet (~> 2.0)
rubysl-nkf (~> 2.0)
rubysl-observer (~> 2.0)
rubysl-open-uri (~> 2.0)
rubysl-open3 (~> 2.0)
rubysl-openssl (~> 2.0)
rubysl-optparse (~> 2.0)
rubysl-ostruct (~> 2.0)
rubysl-pathname (~> 2.0)
rubysl-prettyprint (~> 2.0)
rubysl-prime (~> 2.0)
rubysl-profile (~> 2.0)
rubysl-profiler (~> 2.0)
rubysl-pstore (~> 2.0)
rubysl-pty (~> 2.0)
rubysl-rational (~> 2.0)
rubysl-readline (~> 2.0)
rubysl-resolv (~> 2.0)
rubysl-rexml (~> 2.0)
rubysl-rinda (~> 2.0)
rubysl-rss (~> 2.0)
rubysl-scanf (~> 2.0)
rubysl-securerandom (~> 2.0)
rubysl-set (~> 2.0)
rubysl-shellwords (~> 2.0)
rubysl-singleton (~> 2.0)
rubysl-socket (~> 2.0)
rubysl-stringio (~> 2.0)
rubysl-strscan (~> 2.0)
rubysl-sync (~> 2.0)
rubysl-syslog (~> 2.0)
rubysl-tempfile (~> 2.0)
rubysl-test-unit (~> 2.0)
rubysl-thread (~> 2.0)
rubysl-thwait (~> 2.0)
rubysl-time (~> 2.0)
rubysl-timeout (~> 2.0)
rubysl-tmpdir (~> 2.0)
rubysl-tsort (~> 2.0)
rubysl-un (~> 2.0)
rubysl-uri (~> 2.0)
rubysl-weakref (~> 2.0)
rubysl-webrick (~> 2.0)
rubysl-xmlrpc (~> 2.0)
rubysl-yaml (~> 2.0)
rubysl-zlib (~> 2.0)
rubysl-abbrev (2.0.4)
rubysl-base64 (2.0.0)
rubysl-benchmark (2.0.1)
rubysl-bigdecimal (2.0.2)
rubysl-cgi (2.0.1)
rubysl-cgi-session (2.0.1)
rubysl-cmath (2.0.0)
rubysl-complex (2.0.0)
rubysl-continuation (2.0.0)
rubysl-coverage (2.0.3)
rubysl-csv (2.0.2)
rubysl-english (~> 2.0)
rubysl-curses (2.0.0)
rubysl-date (2.0.6)
rubysl-delegate (2.0.1)
rubysl-digest (2.0.3)
rubysl-drb (2.0.1)
rubysl-e2mmap (2.0.0)
rubysl-english (2.0.0)
rubysl-enumerator (2.0.0)
rubysl-erb (2.0.1)
rubysl-etc (2.0.3)
ffi2-generators (~> 0.1)
rubysl-expect (2.0.0)
rubysl-fcntl (2.0.4)
ffi2-generators (~> 0.1)
rubysl-fiber (2.0.0)
rubysl-fileutils (2.0.3)
rubysl-find (2.0.1)
rubysl-forwardable (2.0.1)
rubysl-getoptlong (2.0.0)
rubysl-gserver (2.0.0)
rubysl-socket (~> 2.0)
rubysl-thread (~> 2.0)
rubysl-io-console (2.0.0)
rubysl-io-nonblock (2.0.0)
rubysl-io-wait (2.0.0)
rubysl-ipaddr (2.0.0)
rubysl-irb (2.0.4)
rubysl-e2mmap (~> 2.0)
rubysl-mathn (~> 2.0)
rubysl-readline (~> 2.0)
rubysl-thread (~> 2.0)
rubysl-logger (2.0.0)
rubysl-mathn (2.0.0)
rubysl-matrix (2.1.0)
rubysl-e2mmap (~> 2.0)
rubysl-mkmf (2.0.1)
rubysl-fileutils (~> 2.0)
rubysl-shellwords (~> 2.0)
rubysl-monitor (2.0.0)
rubysl-mutex_m (2.0.0)
rubysl-net-ftp (2.0.1)
rubysl-net-http (2.0.4)
rubysl-cgi (~> 2.0)
rubysl-erb (~> 2.0)
rubysl-singleton (~> 2.0)
rubysl-net-imap (2.0.1)
rubysl-net-pop (2.0.1)
rubysl-net-protocol (2.0.1)
rubysl-net-smtp (2.0.1)
rubysl-net-telnet (2.0.0)
rubysl-nkf (2.0.1)
rubysl-observer (2.0.0)
rubysl-open-uri (2.0.0)
rubysl-open3 (2.0.0)
rubysl-openssl (2.0.4)
rubysl-optparse (2.0.1)
rubysl-shellwords (~> 2.0)
rubysl-ostruct (2.0.4)
rubysl-pathname (2.0.0)
rubysl-prettyprint (2.0.2)
rubysl-prime (2.0.0)
rubysl-profile (2.0.0)
rubysl-profiler (2.0.1)
rubysl-pstore (2.0.0)
rubysl-pty (2.0.2)
rubysl-rational (2.0.1)
rubysl-readline (2.0.2)
rubysl-resolv (2.0.0)
rubysl-rexml (2.0.2)
rubysl-rinda (2.0.0)
rubysl-rss (2.0.0)
rubysl-scanf (2.0.0)
rubysl-securerandom (2.0.0)
rubysl-set (2.0.1)
rubysl-shellwords (2.0.0)
rubysl-singleton (2.0.0)
rubysl-socket (2.0.1)
rubysl-stringio (2.0.0)
rubysl-strscan (2.0.0)
rubysl-sync (2.0.0)
rubysl-syslog (2.0.1)
ffi2-generators (~> 0.1)
rubysl-tempfile (2.0.1)
rubysl-test-unit (2.0.1)
minitest (~> 4.7)
rubysl-thread (2.0.2)
rubysl-thwait (2.0.0)
rubysl-time (2.0.3)
rubysl-timeout (2.0.0)
rubysl-tmpdir (2.0.0)
rubysl-tsort (2.0.1)
rubysl-un (2.0.0)
rubysl-fileutils (~> 2.0)
rubysl-optparse (~> 2.0)
rubysl-uri (2.0.0)
rubysl-weakref (2.0.0)
rubysl-webrick (2.0.0)
rubysl-xmlrpc (2.0.0)
rubysl-yaml (2.0.3)
rubysl-zlib (2.0.1)
shindo (0.3.4)
formatador (>= 0.1.1)
sinatra (1.3.2)
rack (~> 1.3, >= 1.3.6)
rack-protection (~> 1.2)
tilt (~> 1.3, >= 1.3.3)
sinatra-contrib (1.3.2)
backports (>= 2.0)
eventmachine
rack-protection
rack-test
sinatra (~> 1.3.0)
tilt (~> 1.3)
tilt (1.3.3)
unicorn (4.8.3)
kgio (~> 2.6)
rack
raindrops (~> 0.7)
PLATFORMS
java
ruby
DEPENDENCIES
activesupport
delorean
eventmachine (>= 1.0.4)
excon!
jruby-openssl (~> 0.9)
json (>= 1.8.2)
open4
puma
rack (~> 1.6)
rake
rdoc
rspec (>= 3.5.0)
rubysl (~> 2.0)
shindo
sinatra
sinatra-contrib
unicorn
BUNDLED WITH
1.13.1


@@ -1,196 +0,0 @@
SHELL = /bin/sh
# V=0 quiet, V=1 verbose. other values don't work.
V = 0
Q1 = $(V:1=)
Q = $(Q1:0=@)
ECHO1 = $(V:1=@:)
ECHO = $(ECHO1:0=@echo)
NULLCMD = :
#### Start of system configuration section. ####
srcdir = .
topdir = /opt/chefdk/embedded/include/ruby-2.3.0
hdrdir = $(topdir)
arch_hdrdir = /opt/chefdk/embedded/include/ruby-2.3.0/x86_64-darwin13
PATH_SEPARATOR = :
VPATH = $(srcdir):$(arch_hdrdir)/ruby:$(hdrdir)/ruby
prefix = $(DESTDIR)/opt/chefdk/embedded
rubysitearchprefix = $(rubylibprefix)/$(sitearch)
rubyarchprefix = $(rubylibprefix)/$(arch)
rubylibprefix = $(libdir)/$(RUBY_BASE_NAME)
exec_prefix = $(prefix)
vendorarchhdrdir = $(vendorhdrdir)/$(sitearch)
sitearchhdrdir = $(sitehdrdir)/$(sitearch)
rubyarchhdrdir = $(rubyhdrdir)/$(arch)
vendorhdrdir = $(rubyhdrdir)/vendor_ruby
sitehdrdir = $(rubyhdrdir)/site_ruby
rubyhdrdir = $(includedir)/$(RUBY_VERSION_NAME)
vendorarchdir = $(vendorlibdir)/$(sitearch)
vendorlibdir = $(vendordir)/$(ruby_version)
vendordir = $(rubylibprefix)/vendor_ruby
sitearchdir = $(DESTDIR)./.gem.20170131-68034-1687ce7
sitelibdir = $(DESTDIR)./.gem.20170131-68034-1687ce7
sitedir = $(rubylibprefix)/site_ruby
rubyarchdir = $(rubylibdir)/$(arch)
rubylibdir = $(rubylibprefix)/$(ruby_version)
sitearchincludedir = $(includedir)/$(sitearch)
archincludedir = $(includedir)/$(arch)
sitearchlibdir = $(libdir)/$(sitearch)
archlibdir = $(libdir)/$(arch)
ridir = $(datarootdir)/$(RI_BASE_NAME)
mandir = $(datarootdir)/man
localedir = $(datarootdir)/locale
libdir = $(exec_prefix)/lib
psdir = $(docdir)
pdfdir = $(docdir)
dvidir = $(docdir)
htmldir = $(docdir)
infodir = $(datarootdir)/info
docdir = $(datarootdir)/doc/$(PACKAGE)
oldincludedir = $(DESTDIR)/usr/include
includedir = $(SDKROOT)$(prefix)/include
localstatedir = $(prefix)/var
sharedstatedir = $(prefix)/com
sysconfdir = $(prefix)/etc
datadir = $(datarootdir)
datarootdir = $(prefix)/share
libexecdir = $(exec_prefix)/libexec
sbindir = $(exec_prefix)/sbin
bindir = $(exec_prefix)/bin
archdir = $(rubyarchdir)
CC = clang
CXX = g++
LIBRUBY = $(LIBRUBY_SO)
LIBRUBY_A = lib$(RUBY_SO_NAME)-static.a
LIBRUBYARG_SHARED = -l$(RUBY_SO_NAME)
LIBRUBYARG_STATIC = -l$(RUBY_SO_NAME)-static -framework CoreFoundation
empty =
OUTFLAG = -o $(empty)
COUTFLAG = -o $(empty)
RUBY_EXTCONF_H =
cflags = $(optflags) $(debugflags) $(warnflags)
cxxflags = $(optflags) $(debugflags) $(warnflags)
optflags = -O3 -fno-fast-math
debugflags = -ggdb3
warnflags = -Wall -Wextra -Wno-unused-parameter -Wno-parentheses -Wno-long-long -diag-disable=2259 -Wno-missing-field-initializers -Wunused-variable -Wpointer-arith -Wwrite-strings -Wdeclaration-after-statement -Wshorten-64-to-32 -Wimplicit-function-declaration -Wdivision-by-zero -Wdeprecated-declarations -Wextra-tokens
CCDLFLAGS = -fno-common
CFLAGS = $(CCDLFLAGS) -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -O3 -g -pipe -Qunused-arguments -fno-common $(ARCH_FLAG)
INCFLAGS = -I. -I$(arch_hdrdir) -I$(hdrdir)/ruby/backward -I$(hdrdir) -I$(srcdir)
DEFS =
CPPFLAGS = -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -arch x86_64 -m64 -O3 -g -pipe -Qunused-arguments -I/opt/chefdk/embedded/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_UNLIMITED_SELECT -D_REENTRANT $(DEFS) $(cppflags)
CXXFLAGS = $(CCDLFLAGS) -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -O3 -g -pipe -Qunused-arguments $(ARCH_FLAG)
ldflags = -L. -L/opt/chefdk/embedded/lib -fstack-protector -L/opt/chefdk/embedded/lib
dldflags = -Wl,-undefined,dynamic_lookup -Wl,-multiply_defined,suppress -L/opt/chefdk/embedded/lib
ARCH_FLAG = -m64
DLDFLAGS = $(ldflags) $(dldflags) $(ARCH_FLAG)
LDSHARED = $(CC) -dynamic -bundle
LDSHAREDXX = $(CXX) -dynamic -bundle
AR = ar
EXEEXT =
RUBY_INSTALL_NAME = $(RUBY_BASE_NAME)
RUBY_SO_NAME = ruby.2.3.0
RUBYW_INSTALL_NAME =
RUBY_VERSION_NAME = $(RUBY_BASE_NAME)-$(ruby_version)
RUBYW_BASE_NAME = rubyw
RUBY_BASE_NAME = ruby
arch = x86_64-darwin13
sitearch = $(arch)
ruby_version = 2.3.0
ruby = $(bindir)/$(RUBY_BASE_NAME)
RUBY = $(ruby)
ruby_headers = $(hdrdir)/ruby.h $(hdrdir)/ruby/ruby.h $(hdrdir)/ruby/defines.h $(hdrdir)/ruby/missing.h $(hdrdir)/ruby/intern.h $(hdrdir)/ruby/st.h $(hdrdir)/ruby/subst.h $(arch_hdrdir)/ruby/config.h
RM = rm -f
RM_RF = $(RUBY) -run -e rm -- -rf
RMDIRS = rmdir -p
MAKEDIRS = mkdir -p
INSTALL = /usr/bin/install -c
INSTALL_PROG = $(INSTALL) -m 0755
INSTALL_DATA = $(INSTALL) -m 644
COPY = cp
TOUCH = exit >
#### End of system configuration section. ####
preload =
libpath = . $(libdir) /opt/chefdk/embedded/lib
LIBPATH = -L. -L$(libdir) -L/opt/chefdk/embedded/lib
DEFFILE =
CLEANFILES = mkmf.log
DISTCLEANFILES =
DISTCLEANDIRS =
extout =
extout_prefix =
target_prefix =
LOCAL_LIBS =
LIBS = $(LIBRUBYARG_SHARED) -lpthread -ldl -lobjc
ORIG_SRCS =
SRCS = $(ORIG_SRCS)
OBJS =
HDRS =
TARGET =
TARGET_NAME =
TARGET_ENTRY = Init_$(TARGET_NAME)
DLLIB =
EXTSTATIC =
STATIC_LIB =
TIMESTAMP_DIR = .
BINDIR = $(bindir)
RUBYCOMMONDIR = $(sitedir)$(target_prefix)
RUBYLIBDIR = $(sitelibdir)$(target_prefix)
RUBYARCHDIR = $(sitearchdir)$(target_prefix)
HDRDIR = $(rubyhdrdir)/ruby$(target_prefix)
ARCHHDRDIR = $(rubyhdrdir)/$(arch)/ruby$(target_prefix)
TARGET_SO = $(DLLIB)
CLEANLIBS = $(TARGET).bundle
CLEANOBJS = *.o *.bak
all: Makefile
static: $(STATIC_LIB) install-rb
.PHONY: all install static install-so install-rb
.PHONY: clean clean-so clean-static clean-rb
clean-static::
clean-rb-default::
clean-rb::
clean-so::
clean: clean-so clean-static clean-rb-default clean-rb
-$(Q)$(RM) $(CLEANLIBS) $(CLEANOBJS) $(CLEANFILES) .*.time
distclean-rb-default::
distclean-rb::
distclean-so::
distclean-static::
distclean: clean distclean-so distclean-static distclean-rb-default distclean-rb
-$(Q)$(RM) Makefile $(RUBY_EXTCONF_H) conftest.* mkmf.log
-$(Q)$(RM) core ruby$(EXEEXT) *~ $(DISTCLEANFILES)
-$(Q)$(RMDIRS) $(DISTCLEANDIRS) 2> /dev/null || true
realclean: distclean
install: install-so install-rb
install-so: Makefile
install-rb: pre-install-rb install-rb-default
install-rb-default: pre-install-rb-default
pre-install-rb: Makefile
pre-install-rb-default: Makefile
pre-install-rb-default:
@$(NULLCMD)
site-install: site-install-so site-install-rb
site-install-so: install-so
site-install-rb: install-rb


@@ -1,262 +0,0 @@
SHELL = /bin/sh
# V=0 quiet, V=1 verbose. other values don't work.
V = 0
Q1 = $(V:1=)
Q = $(Q1:0=@)
ECHO1 = $(V:1=@:)
ECHO = $(ECHO1:0=@echo)
NULLCMD = :
#### Start of system configuration section. ####
srcdir = .
topdir = /opt/chefdk/embedded/include/ruby-2.3.0
hdrdir = $(topdir)
arch_hdrdir = /opt/chefdk/embedded/include/ruby-2.3.0/x86_64-darwin13
PATH_SEPARATOR = :
VPATH = $(srcdir):$(arch_hdrdir)/ruby:$(hdrdir)/ruby
prefix = $(DESTDIR)/opt/chefdk/embedded
rubysitearchprefix = $(rubylibprefix)/$(sitearch)
rubyarchprefix = $(rubylibprefix)/$(arch)
rubylibprefix = $(libdir)/$(RUBY_BASE_NAME)
exec_prefix = $(prefix)
vendorarchhdrdir = $(vendorhdrdir)/$(sitearch)
sitearchhdrdir = $(sitehdrdir)/$(sitearch)
rubyarchhdrdir = $(rubyhdrdir)/$(arch)
vendorhdrdir = $(rubyhdrdir)/vendor_ruby
sitehdrdir = $(rubyhdrdir)/site_ruby
rubyhdrdir = $(includedir)/$(RUBY_VERSION_NAME)
vendorarchdir = $(vendorlibdir)/$(sitearch)
vendorlibdir = $(vendordir)/$(ruby_version)
vendordir = $(rubylibprefix)/vendor_ruby
sitearchdir = $(DESTDIR)./.gem.20170131-68034-12q3abg
sitelibdir = $(DESTDIR)./.gem.20170131-68034-12q3abg
sitedir = $(rubylibprefix)/site_ruby
rubyarchdir = $(rubylibdir)/$(arch)
rubylibdir = $(rubylibprefix)/$(ruby_version)
sitearchincludedir = $(includedir)/$(sitearch)
archincludedir = $(includedir)/$(arch)
sitearchlibdir = $(libdir)/$(sitearch)
archlibdir = $(libdir)/$(arch)
ridir = $(datarootdir)/$(RI_BASE_NAME)
mandir = $(datarootdir)/man
localedir = $(datarootdir)/locale
libdir = $(exec_prefix)/lib
psdir = $(docdir)
pdfdir = $(docdir)
dvidir = $(docdir)
htmldir = $(docdir)
infodir = $(datarootdir)/info
docdir = $(datarootdir)/doc/$(PACKAGE)
oldincludedir = $(DESTDIR)/usr/include
includedir = $(SDKROOT)$(prefix)/include
localstatedir = $(prefix)/var
sharedstatedir = $(prefix)/com
sysconfdir = $(prefix)/etc
datadir = $(datarootdir)
datarootdir = $(prefix)/share
libexecdir = $(exec_prefix)/libexec
sbindir = $(exec_prefix)/sbin
bindir = $(exec_prefix)/bin
archdir = $(rubyarchdir)
CC = clang
CXX = g++
LIBRUBY = $(LIBRUBY_SO)
LIBRUBY_A = lib$(RUBY_SO_NAME)-static.a
LIBRUBYARG_SHARED = -l$(RUBY_SO_NAME)
LIBRUBYARG_STATIC = -l$(RUBY_SO_NAME)-static -framework CoreFoundation
empty =
OUTFLAG = -o $(empty)
COUTFLAG = -o $(empty)
RUBY_EXTCONF_H =
cflags = $(optflags) $(debugflags) $(warnflags)
cxxflags = $(optflags) $(debugflags) $(warnflags)
optflags = -O3 -fno-fast-math
debugflags = -ggdb3
warnflags = -Wall -Wextra -Wno-unused-parameter -Wno-parentheses -Wno-long-long -diag-disable=2259 -Wno-missing-field-initializers -Wunused-variable -Wpointer-arith -Wwrite-strings -Wdeclaration-after-statement -Wshorten-64-to-32 -Wimplicit-function-declaration -Wdivision-by-zero -Wdeprecated-declarations -Wextra-tokens
CCDLFLAGS = -fno-common
CFLAGS = $(CCDLFLAGS) -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -O3 -g -pipe -Qunused-arguments -fno-common $(ARCH_FLAG)
INCFLAGS = -I. -I$(arch_hdrdir) -I$(hdrdir)/ruby/backward -I$(hdrdir) -I$(srcdir)
DEFS =
CPPFLAGS = -DJSON_GENERATOR -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -arch x86_64 -m64 -O3 -g -pipe -Qunused-arguments -I/opt/chefdk/embedded/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_UNLIMITED_SELECT -D_REENTRANT $(DEFS) $(cppflags)
CXXFLAGS = $(CCDLFLAGS) -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -O3 -g -pipe -Qunused-arguments $(ARCH_FLAG)
ldflags = -L. -L/opt/chefdk/embedded/lib -fstack-protector -L/opt/chefdk/embedded/lib
dldflags = -Wl,-undefined,dynamic_lookup -Wl,-multiply_defined,suppress -L/opt/chefdk/embedded/lib
ARCH_FLAG = -m64
DLDFLAGS = $(ldflags) $(dldflags) $(ARCH_FLAG)
LDSHARED = $(CC) -dynamic -bundle
LDSHAREDXX = $(CXX) -dynamic -bundle
AR = ar
EXEEXT =
RUBY_INSTALL_NAME = $(RUBY_BASE_NAME)
RUBY_SO_NAME = ruby.2.3.0
RUBYW_INSTALL_NAME =
RUBY_VERSION_NAME = $(RUBY_BASE_NAME)-$(ruby_version)
RUBYW_BASE_NAME = rubyw
RUBY_BASE_NAME = ruby
arch = x86_64-darwin13
sitearch = $(arch)
ruby_version = 2.3.0
ruby = $(bindir)/$(RUBY_BASE_NAME)
RUBY = $(ruby)
ruby_headers = $(hdrdir)/ruby.h $(hdrdir)/ruby/ruby.h $(hdrdir)/ruby/defines.h $(hdrdir)/ruby/missing.h $(hdrdir)/ruby/intern.h $(hdrdir)/ruby/st.h $(hdrdir)/ruby/subst.h $(arch_hdrdir)/ruby/config.h
RM = rm -f
RM_RF = $(RUBY) -run -e rm -- -rf
RMDIRS = rmdir -p
MAKEDIRS = mkdir -p
INSTALL = /usr/bin/install -c
INSTALL_PROG = $(INSTALL) -m 0755
INSTALL_DATA = $(INSTALL) -m 644
COPY = cp
TOUCH = exit >
#### End of system configuration section. ####
preload =
libpath = . $(libdir) /opt/chefdk/embedded/lib
LIBPATH = -L. -L$(libdir) -L/opt/chefdk/embedded/lib
DEFFILE =
CLEANFILES = mkmf.log
DISTCLEANFILES =
DISTCLEANDIRS =
extout =
extout_prefix =
target_prefix = /json/ext
LOCAL_LIBS =
LIBS = $(LIBRUBYARG_SHARED) -lpthread -ldl -lobjc
ORIG_SRCS = generator.c
SRCS = $(ORIG_SRCS)
OBJS = generator.o
HDRS = $(srcdir)/generator.h
TARGET = generator
TARGET_NAME = generator
TARGET_ENTRY = Init_$(TARGET_NAME)
DLLIB = $(TARGET).bundle
EXTSTATIC =
STATIC_LIB =
TIMESTAMP_DIR = .
BINDIR = $(bindir)
RUBYCOMMONDIR = $(sitedir)$(target_prefix)
RUBYLIBDIR = $(sitelibdir)$(target_prefix)
RUBYARCHDIR = $(sitearchdir)$(target_prefix)
HDRDIR = $(rubyhdrdir)/ruby$(target_prefix)
ARCHHDRDIR = $(rubyhdrdir)/$(arch)/ruby$(target_prefix)
TARGET_SO = $(DLLIB)
CLEANLIBS = $(TARGET).bundle
CLEANOBJS = *.o *.bak
all: $(DLLIB)
static: $(STATIC_LIB) install-rb
.PHONY: all install static install-so install-rb
.PHONY: clean clean-so clean-static clean-rb
clean-static::
clean-rb-default::
clean-rb::
clean-so::
clean: clean-so clean-static clean-rb-default clean-rb
-$(Q)$(RM) $(CLEANLIBS) $(CLEANOBJS) $(CLEANFILES) .*.time
distclean-rb-default::
distclean-rb::
distclean-so::
distclean-static::
distclean: clean distclean-so distclean-static distclean-rb-default distclean-rb
-$(Q)$(RM) Makefile $(RUBY_EXTCONF_H) conftest.* mkmf.log
-$(Q)$(RM) core ruby$(EXEEXT) *~ $(DISTCLEANFILES)
-$(Q)$(RMDIRS) $(DISTCLEANDIRS) 2> /dev/null || true
realclean: distclean
install: install-so install-rb
install-so: $(DLLIB) $(TIMESTAMP_DIR)/.RUBYARCHDIR.-.json.-.ext.time
$(INSTALL_PROG) $(DLLIB) $(RUBYARCHDIR)
clean-static::
-$(Q)$(RM) $(STATIC_LIB)
install-rb: pre-install-rb install-rb-default
install-rb-default: pre-install-rb-default
pre-install-rb: Makefile
pre-install-rb-default: Makefile
pre-install-rb-default:
@$(NULLCMD)
$(TIMESTAMP_DIR)/.RUBYARCHDIR.-.json.-.ext.time:
$(Q) $(MAKEDIRS) $(@D) $(RUBYARCHDIR)
$(Q) $(TOUCH) $@
site-install: site-install-so site-install-rb
site-install-so: install-so
site-install-rb: install-rb
.SUFFIXES: .c .m .cc .mm .cxx .cpp .o .S
.cc.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.cc.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.mm.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.mm.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.cxx.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.cxx.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.cpp.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.cpp.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.c.o:
$(ECHO) compiling $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -c $<
.c.S:
$(ECHO) translating $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -S $<
.m.o:
$(ECHO) compiling $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -c $<
.m.S:
$(ECHO) translating $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -S $<
$(DLLIB): $(OBJS) Makefile
$(ECHO) linking shared-object json/ext/$(DLLIB)
-$(Q)$(RM) $(@)
$(Q) $(LDSHARED) -o $@ $(OBJS) $(LIBPATH) $(DLDFLAGS) $(LOCAL_LIBS) $(LIBS)
$(Q) $(POSTLINK)
###
generator.o: generator.c generator.h $(srcdir)/../fbuffer/fbuffer.h

View File

@ -1,262 +0,0 @@
SHELL = /bin/sh
# V=0 quiet, V=1 verbose. other values don't work.
V = 0
Q1 = $(V:1=)
Q = $(Q1:0=@)
ECHO1 = $(V:1=@:)
ECHO = $(ECHO1:0=@echo)
NULLCMD = :
#### Start of system configuration section. ####
srcdir = .
topdir = /opt/chefdk/embedded/include/ruby-2.3.0
hdrdir = $(topdir)
arch_hdrdir = /opt/chefdk/embedded/include/ruby-2.3.0/x86_64-darwin13
PATH_SEPARATOR = :
VPATH = $(srcdir):$(arch_hdrdir)/ruby:$(hdrdir)/ruby
prefix = $(DESTDIR)/opt/chefdk/embedded
rubysitearchprefix = $(rubylibprefix)/$(sitearch)
rubyarchprefix = $(rubylibprefix)/$(arch)
rubylibprefix = $(libdir)/$(RUBY_BASE_NAME)
exec_prefix = $(prefix)
vendorarchhdrdir = $(vendorhdrdir)/$(sitearch)
sitearchhdrdir = $(sitehdrdir)/$(sitearch)
rubyarchhdrdir = $(rubyhdrdir)/$(arch)
vendorhdrdir = $(rubyhdrdir)/vendor_ruby
sitehdrdir = $(rubyhdrdir)/site_ruby
rubyhdrdir = $(includedir)/$(RUBY_VERSION_NAME)
vendorarchdir = $(vendorlibdir)/$(sitearch)
vendorlibdir = $(vendordir)/$(ruby_version)
vendordir = $(rubylibprefix)/vendor_ruby
sitearchdir = $(DESTDIR)./.gem.20170131-68034-1leg24r
sitelibdir = $(DESTDIR)./.gem.20170131-68034-1leg24r
sitedir = $(rubylibprefix)/site_ruby
rubyarchdir = $(rubylibdir)/$(arch)
rubylibdir = $(rubylibprefix)/$(ruby_version)
sitearchincludedir = $(includedir)/$(sitearch)
archincludedir = $(includedir)/$(arch)
sitearchlibdir = $(libdir)/$(sitearch)
archlibdir = $(libdir)/$(arch)
ridir = $(datarootdir)/$(RI_BASE_NAME)
mandir = $(datarootdir)/man
localedir = $(datarootdir)/locale
libdir = $(exec_prefix)/lib
psdir = $(docdir)
pdfdir = $(docdir)
dvidir = $(docdir)
htmldir = $(docdir)
infodir = $(datarootdir)/info
docdir = $(datarootdir)/doc/$(PACKAGE)
oldincludedir = $(DESTDIR)/usr/include
includedir = $(SDKROOT)$(prefix)/include
localstatedir = $(prefix)/var
sharedstatedir = $(prefix)/com
sysconfdir = $(prefix)/etc
datadir = $(datarootdir)
datarootdir = $(prefix)/share
libexecdir = $(exec_prefix)/libexec
sbindir = $(exec_prefix)/sbin
bindir = $(exec_prefix)/bin
archdir = $(rubyarchdir)
CC = clang
CXX = g++
LIBRUBY = $(LIBRUBY_SO)
LIBRUBY_A = lib$(RUBY_SO_NAME)-static.a
LIBRUBYARG_SHARED = -l$(RUBY_SO_NAME)
LIBRUBYARG_STATIC = -l$(RUBY_SO_NAME)-static -framework CoreFoundation
empty =
OUTFLAG = -o $(empty)
COUTFLAG = -o $(empty)
RUBY_EXTCONF_H =
cflags = $(optflags) $(debugflags) $(warnflags)
cxxflags = $(optflags) $(debugflags) $(warnflags)
optflags = -O3 -fno-fast-math
debugflags = -ggdb3
warnflags = -Wall -Wextra -Wno-unused-parameter -Wno-parentheses -Wno-long-long -diag-disable=2259 -Wno-missing-field-initializers -Wunused-variable -Wpointer-arith -Wwrite-strings -Wdeclaration-after-statement -Wshorten-64-to-32 -Wimplicit-function-declaration -Wdivision-by-zero -Wdeprecated-declarations -Wextra-tokens
CCDLFLAGS = -fno-common
CFLAGS = $(CCDLFLAGS) -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -O3 -g -pipe -Qunused-arguments -fno-common $(ARCH_FLAG)
INCFLAGS = -I. -I$(arch_hdrdir) -I$(hdrdir)/ruby/backward -I$(hdrdir) -I$(srcdir)
DEFS =
CPPFLAGS = -DHAVE_RB_ENC_RAISE -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -arch x86_64 -m64 -O3 -g -pipe -Qunused-arguments -I/opt/chefdk/embedded/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_UNLIMITED_SELECT -D_REENTRANT $(DEFS) $(cppflags)
CXXFLAGS = $(CCDLFLAGS) -I/opt/chefdk/embedded/include -O2 -I/opt/chefdk/embedded/include/ncurses -O3 -g -pipe -Qunused-arguments $(ARCH_FLAG)
ldflags = -L. -L/opt/chefdk/embedded/lib -fstack-protector -L/opt/chefdk/embedded/lib
dldflags = -Wl,-undefined,dynamic_lookup -Wl,-multiply_defined,suppress -L/opt/chefdk/embedded/lib
ARCH_FLAG = -m64
DLDFLAGS = $(ldflags) $(dldflags) $(ARCH_FLAG)
LDSHARED = $(CC) -dynamic -bundle
LDSHAREDXX = $(CXX) -dynamic -bundle
AR = ar
EXEEXT =
RUBY_INSTALL_NAME = $(RUBY_BASE_NAME)
RUBY_SO_NAME = ruby.2.3.0
RUBYW_INSTALL_NAME =
RUBY_VERSION_NAME = $(RUBY_BASE_NAME)-$(ruby_version)
RUBYW_BASE_NAME = rubyw
RUBY_BASE_NAME = ruby
arch = x86_64-darwin13
sitearch = $(arch)
ruby_version = 2.3.0
ruby = $(bindir)/$(RUBY_BASE_NAME)
RUBY = $(ruby)
ruby_headers = $(hdrdir)/ruby.h $(hdrdir)/ruby/ruby.h $(hdrdir)/ruby/defines.h $(hdrdir)/ruby/missing.h $(hdrdir)/ruby/intern.h $(hdrdir)/ruby/st.h $(hdrdir)/ruby/subst.h $(arch_hdrdir)/ruby/config.h
RM = rm -f
RM_RF = $(RUBY) -run -e rm -- -rf
RMDIRS = rmdir -p
MAKEDIRS = mkdir -p
INSTALL = /usr/bin/install -c
INSTALL_PROG = $(INSTALL) -m 0755
INSTALL_DATA = $(INSTALL) -m 644
COPY = cp
TOUCH = exit >
#### End of system configuration section. ####
preload =
libpath = . $(libdir) /opt/chefdk/embedded/lib
LIBPATH = -L. -L$(libdir) -L/opt/chefdk/embedded/lib
DEFFILE =
CLEANFILES = mkmf.log
DISTCLEANFILES =
DISTCLEANDIRS =
extout =
extout_prefix =
target_prefix = /json/ext
LOCAL_LIBS =
LIBS = $(LIBRUBYARG_SHARED) -lpthread -ldl -lobjc
ORIG_SRCS = parser.c
SRCS = $(ORIG_SRCS)
OBJS = parser.o
HDRS = $(srcdir)/parser.h
TARGET = parser
TARGET_NAME = parser
TARGET_ENTRY = Init_$(TARGET_NAME)
DLLIB = $(TARGET).bundle
EXTSTATIC =
STATIC_LIB =
TIMESTAMP_DIR = .
BINDIR = $(bindir)
RUBYCOMMONDIR = $(sitedir)$(target_prefix)
RUBYLIBDIR = $(sitelibdir)$(target_prefix)
RUBYARCHDIR = $(sitearchdir)$(target_prefix)
HDRDIR = $(rubyhdrdir)/ruby$(target_prefix)
ARCHHDRDIR = $(rubyhdrdir)/$(arch)/ruby$(target_prefix)
TARGET_SO = $(DLLIB)
CLEANLIBS = $(TARGET).bundle
CLEANOBJS = *.o *.bak
all: $(DLLIB)
static: $(STATIC_LIB) install-rb
.PHONY: all install static install-so install-rb
.PHONY: clean clean-so clean-static clean-rb
clean-static::
clean-rb-default::
clean-rb::
clean-so::
clean: clean-so clean-static clean-rb-default clean-rb
-$(Q)$(RM) $(CLEANLIBS) $(CLEANOBJS) $(CLEANFILES) .*.time
distclean-rb-default::
distclean-rb::
distclean-so::
distclean-static::
distclean: clean distclean-so distclean-static distclean-rb-default distclean-rb
-$(Q)$(RM) Makefile $(RUBY_EXTCONF_H) conftest.* mkmf.log
-$(Q)$(RM) core ruby$(EXEEXT) *~ $(DISTCLEANFILES)
-$(Q)$(RMDIRS) $(DISTCLEANDIRS) 2> /dev/null || true
realclean: distclean
install: install-so install-rb
install-so: $(DLLIB) $(TIMESTAMP_DIR)/.RUBYARCHDIR.-.json.-.ext.time
$(INSTALL_PROG) $(DLLIB) $(RUBYARCHDIR)
clean-static::
-$(Q)$(RM) $(STATIC_LIB)
install-rb: pre-install-rb install-rb-default
install-rb-default: pre-install-rb-default
pre-install-rb: Makefile
pre-install-rb-default: Makefile
pre-install-rb-default:
@$(NULLCMD)
$(TIMESTAMP_DIR)/.RUBYARCHDIR.-.json.-.ext.time:
$(Q) $(MAKEDIRS) $(@D) $(RUBYARCHDIR)
$(Q) $(TOUCH) $@
site-install: site-install-so site-install-rb
site-install-so: install-so
site-install-rb: install-rb
.SUFFIXES: .c .m .cc .mm .cxx .cpp .o .S
.cc.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.cc.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.mm.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.mm.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.cxx.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.cxx.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.cpp.o:
$(ECHO) compiling $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $<
.cpp.S:
$(ECHO) translating $(<)
$(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -S $<
.c.o:
$(ECHO) compiling $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -c $<
.c.S:
$(ECHO) translating $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -S $<
.m.o:
$(ECHO) compiling $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -c $<
.m.S:
$(ECHO) translating $(<)
$(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -S $<
$(DLLIB): $(OBJS) Makefile
$(ECHO) linking shared-object json/ext/$(DLLIB)
-$(Q)$(RM) $(@)
$(Q) $(LDSHARED) -o $@ $(OBJS) $(LIBPATH) $(DLDFLAGS) $(LOCAL_LIBS) $(LIBS)
$(Q) $(POSTLINK)
###
parser.o: parser.c parser.h $(srcdir)/../fbuffer/fbuffer.h

View File

@ -0,0 +1,175 @@
---
driver:
name: dokken
chef_version: latest
privileged: true
volumes: [
'/var/lib/docker', '/var/lib/docker-one', '/var/lib/docker-two'
]
transport:
name: dokken
provisioner:
name: dokken
deprecations_as_errors: true
verifier:
name: inspec
platforms:
- name: amazonlinux
driver:
image: dokken/amazonlinux
pid_one_command: /sbin/init
- name: amazonlinux-2
driver:
image: dokken/amazonlinux-2
pid_one_command: /usr/lib/systemd/systemd
- name: debian-8
driver:
image: dokken/debian-8
pid_one_command: /bin/systemd
- name: debian-9
driver:
image: dokken/debian-9
pid_one_command: /bin/systemd
- name: centos-7
driver:
image: dokken/centos-7
pid_one_command: /usr/lib/systemd/systemd
- name: fedora-28
driver:
image: dokken/fedora-28
pid_one_command: /usr/lib/systemd/systemd
- name: ubuntu-16.04
driver:
image: dokken/ubuntu-16.04
pid_one_command: /bin/systemd
- name: ubuntu-18.04
driver:
image: dokken/ubuntu-18.04
pid_one_command: /bin/systemd
suites:
###############################
# docker_installation resources
###############################
- name: installation_script_main
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
attributes:
docker:
repo: 'main'
run_list:
- recipe[docker_test::installation_script]
- name: installation_script_test
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
attributes:
docker:
repo: 'test'
run_list:
- recipe[docker_test::installation_script]
- name: installation_script_experimental
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
attributes:
docker:
repo: 'experimental'
run_list:
- recipe[docker_test::installation_script]
- name: installation_package
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::installation_package]
- name: installation_tarball
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::installation_tarball]
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
##################
# resource testing
##################
- name: resources
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::image]
- recipe[docker_test::container]
- recipe[docker_test::exec]
- recipe[docker_test::plugin]
- name: network
includes: [
'ubuntu-16.04',
]
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::network]
- name: volume
includes: [
'ubuntu-16.04',
]
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::volume]
- name: registry
includes: [
'ubuntu-16.04',
]
attributes:
docker:
version: '18.06.0'
run_list:
- recipe[docker_test::default]
- recipe[docker_test::registry]
#############################
# quick service smoke testing
#############################
- name: smoke
includes: [
'ubuntu-16.04',
'ubuntu-18.04'
]
run_list:
- recipe[docker_test::smoke]

View File

@ -1,13 +0,0 @@
begin
gem 'docker-api', '= 1.33.2'
rescue LoadError
unless defined?(ChefSpec)
run_context = Chef::RunContext.new(Chef::Node.new, {}, Chef::EventDispatch::Dispatcher.new)
require 'chef/resource/chef_gem'
docker = Chef::Resource::ChefGem.new('docker-api', run_context)
docker.version '= 1.33.2'
docker.run_action(:install)
end
end

View File

@ -1,9 +1,46 @@
module DockerCookbook
class DockerBase < Chef::Resource
require_relative 'helpers_auth'
require_relative 'helpers_base'
require 'docker'
require 'shellwords'
include DockerHelpers::Base
################
# Helper methods
################
def connection
@connection ||= begin
opts = {}
opts[:read_timeout] = read_timeout if read_timeout
opts[:write_timeout] = write_timeout if write_timeout
if host =~ /^tcp:/
opts[:scheme] = 'https' if tls || !tls_verify.nil?
opts[:ssl_ca_file] = tls_ca_cert if tls_ca_cert
opts[:client_cert] = tls_client_cert if tls_client_cert
opts[:client_key] = tls_client_key if tls_client_key
end
Docker::Connection.new(host || Docker.url, opts)
end
end
def with_retries(&_block)
tries = api_retries
begin
yield
# Only catch errors that can be fixed with retries.
rescue Docker::Error::ServerError, # 500
Docker::Error::UnexpectedResponseError, # 400
Docker::Error::TimeoutError,
Docker::Error::IOError
tries -= 1
retry if tries > 0
raise
end
end
def call_action(_action)
new_resource.run_action
end
#########
# Classes
@ -16,12 +53,6 @@ module DockerCookbook
end
end
class ShellCommandString < String
def ==(other)
other.is_a?(String) && Shellwords.shellwords(self) == Shellwords.shellwords(other)
end
end
class PartialHash < Hash
def ==(other)
other.is_a?(Hash) && all? { |key, val| other.key?(key) && other[key] == val }
@ -36,26 +67,6 @@ module DockerCookbook
#
################
ArrayType = property_type(
is: [Array, nil],
coerce: proc { |v| v.nil? ? nil : Array(v) }
) unless defined?(ArrayType)
Boolean = property_type(
is: [true, false],
default: false
) unless defined?(Boolean)
NonEmptyArray = property_type(
is: [Array, nil],
coerce: proc { |v| Array(v).empty? ? nil : Array(v) }
) unless defined?(NonEmptyArray)
ShellCommand = property_type(
is: [String],
coerce: proc { |v| coerce_shell_command(v) }
) unless defined?(ShellCommand)
UnorderedArrayType = property_type(
is: [UnorderedArray, nil],
coerce: proc { |v| v.nil? ? nil : UnorderedArray.new(Array(v)) }
@ -70,21 +81,57 @@ module DockerCookbook
# Resource properties
#####################
property :api_retries, Integer, default: 3, desired_state: false
property :read_timeout, [Integer, nil], default: 60, desired_state: false
property :write_timeout, [Integer, nil], desired_state: false
property :running_wait_time, [Integer, nil], default: 20, desired_state: false
property :api_retries, Integer,
default: 3,
desired_state: false
property :tls, [Boolean, nil], default: lazy { default_tls }, desired_state: false
property :tls_verify, [Boolean, nil], default: lazy { default_tls_verify }, desired_state: false
property :tls_ca_cert, [String, nil], default: lazy { default_tls_cert_path('ca') }, desired_state: false
property :tls_server_cert, [String, nil], desired_state: false
property :tls_server_key, [String, nil], desired_state: false
property :tls_client_cert, [String, nil], default: lazy { default_tls_cert_path('cert') }, desired_state: false
property :tls_client_key, [String, nil], default: lazy { default_tls_cert_path('key') }, desired_state: false
property :read_timeout, Integer,
default: 60,
desired_state: false
property :write_timeout, Integer,
desired_state: false
property :running_wait_time, Integer,
default: 20,
desired_state: false
property :tls, [TrueClass, FalseClass, nil],
default: lazy { ENV['DOCKER_TLS'] },
desired_state: false
property :tls_verify, [TrueClass, FalseClass, nil],
default: lazy { ENV['DOCKER_TLS_VERIFY'] },
desired_state: false
property :tls_ca_cert, [String, nil],
default: lazy { ENV['DOCKER_CERT_PATH'] ? "#{ENV['DOCKER_CERT_PATH']}/ca.pem" : nil },
desired_state: false
property :tls_server_cert, String,
desired_state: false
property :tls_server_key, String,
desired_state: false
property :tls_client_cert, [String, nil],
default: lazy { ENV['DOCKER_CERT_PATH'] ? "#{ENV['DOCKER_CERT_PATH']}/cert.pem" : nil },
desired_state: false
property :tls_client_key, [String, nil],
default: lazy { ENV['DOCKER_CERT_PATH'] ? "#{ENV['DOCKER_CERT_PATH']}/key.pem" : nil },
desired_state: false
alias_method :tlscacert, :tls_ca_cert
alias_method :tlscert, :tls_server_cert
alias_method :tlskey, :tls_server_key
alias_method :tlsverify, :tls_verify
declare_action_class.class_eval do
include DockerHelpers::Authentication
# https://github.com/docker/docker/blob/4fcb9ac40ce33c4d6e08d5669af6be5e076e2574/registry/auth.go#L231
def parse_registry_host(val)
val.sub(%r{https?://}, '').split('/').first
end
end
end
end
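For orientation, a hedged sketch (not part of this commit) of how the environment-driven defaults above resolve when the usual Docker client variables are exported before the chef-client run; the host and certificate paths below are placeholders:

# Assuming DOCKER_HOST=tcp://192.168.99.100:2376, DOCKER_TLS_VERIFY=1 and
# DOCKER_CERT_PATH=/home/user/.docker are set in the environment:
#   host            -> 'tcp://192.168.99.100:2376'
#   tls_ca_cert     -> '/home/user/.docker/ca.pem'
#   tls_client_cert -> '/home/user/.docker/cert.pem'
#   tls_client_key  -> '/home/user/.docker/key.pem'
# Because tls_verify is then non-nil, #connection switches the scheme to https
# and hands ssl_ca_file, client_cert and client_key through to Docker::Connection.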

View File

@ -1,93 +1,78 @@
module DockerCookbook
class DockerContainer < DockerBase
require 'docker'
require 'shellwords'
require_relative 'helpers_container'
include DockerHelpers::Container
resource_name :docker_container
###########################################################
# In Chef 12.5 and later, we no longer have to use separate
# classes for resource and providers. Instead, we have
# everything in a single class.
#
# For the purposes of my own sanity, I'm going to place all the
# "resource" related bits at the top of the files, and the
# providerish bits at the bottom.
#
#
# Methods for default values and coercion are found in
# helpers_container.rb
###########################################################
# ~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
# Begin classic Chef "resource" section
# ~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
# The non-standard types Boolean, ArrayType, ShellCommand, etc
# are found in the DockerBase class.
property :container_name, String, name_property: true
property :repo, String, default: lazy { container_name }
property :tag, String, default: 'latest'
property :command, ShellCommand
property :attach_stderr, Boolean, default: false, desired_state: false
property :attach_stdin, Boolean, default: false, desired_state: false
property :attach_stdout, Boolean, default: false, desired_state: false
property :autoremove, Boolean, desired_state: false
property :cap_add, NonEmptyArray
property :cap_drop, NonEmptyArray
property :command, [Array, String, nil], coerce: proc { |v| v.is_a?(String) ? ::Shellwords.shellwords(v) : v }
property :attach_stderr, [TrueClass, FalseClass], default: false, desired_state: false
property :attach_stdin, [TrueClass, FalseClass], default: false, desired_state: false
property :attach_stdout, [TrueClass, FalseClass], default: false, desired_state: false
property :autoremove, [TrueClass, FalseClass], default: false, desired_state: false
property :cap_add, [Array, nil], coerce: proc { |v| Array(v).empty? ? nil : Array(v) }
property :cap_drop, [Array, nil], coerce: proc { |v| Array(v).empty? ? nil : Array(v) }
property :cgroup_parent, String, default: ''
property :cpu_shares, [Integer, nil], default: 0
property :cpu_shares, Integer, default: 0
property :cpuset_cpus, String, default: ''
property :detach, Boolean, default: true, desired_state: false
property :detach, [TrueClass, FalseClass], default: true, desired_state: false
property :devices, Array, default: []
property :dns, Array, default: []
property :dns_search, Array, default: []
property :domain_name, String, default: ''
property :entrypoint, ShellCommand
property :entrypoint, [Array, String, nil], coerce: proc { |v| v.is_a?(String) ? ::Shellwords.shellwords(v) : v }
property :env, UnorderedArrayType, default: []
property :extra_hosts, NonEmptyArray
property :env_file, [Array, String], coerce: proc { |v| coerce_env_file(v) }, default: [], desired_state: false
property :extra_hosts, [Array, nil], coerce: proc { |v| Array(v).empty? ? nil : Array(v) }
property :exposed_ports, PartialHashType, default: {}
property :force, Boolean, desired_state: false
property :host, [String, nil], default: lazy { default_host }, desired_state: false
property :force, [TrueClass, FalseClass], default: false, desired_state: false
property :health_check, Hash, default: {}
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
property :hostname, String
property :ipc_mode, String, default: ''
property :kernel_memory, [String, Integer], coerce: proc { |v| coerce_to_bytes(v) }, default: 0
property :labels, [String, Array, Hash], default: {}, coerce: proc { |v| coerce_labels(v) }
property :links, UnorderedArrayType, coerce: proc { |v| coerce_links(v) }
property :log_driver, %w( json-file syslog journald gelf fluentd awslogs splunk etwlogs gcplogs none ), default: 'json-file', desired_state: false
property :log_opts, [Hash, nil], coerce: proc { |v| coerce_log_opts(v) }, desired_state: false
property :init, [TrueClass, FalseClass, nil]
property :ip_address, String
property :mac_address, String
property :memory, Integer, default: 0
property :memory_swap, Integer, default: 0
property :network_disabled, Boolean, default: false
property :network_mode, [String, NilClass], default: 'bridge'
property :open_stdin, Boolean, default: false, desired_state: false
property :outfile, [String, NilClass]
property :memory, [String, Integer], coerce: proc { |v| coerce_to_bytes(v) }, default: 0
property :memory_swap, [String, Integer], coerce: proc { |v| coerce_to_bytes(v) }, default: 0
property :memory_swappiness, Integer, default: 0
property :memory_reservation, Integer, coerce: proc { |v| coerce_to_bytes(v) }, default: 0
property :network_disabled, [TrueClass, FalseClass], default: false
property :network_mode, String, default: 'bridge'
property :network_aliases, [String, Array], default: [], coerce: proc { |v| Array(v) }
property :oom_kill_disable, [TrueClass, FalseClass], default: false
property :oom_score_adj, Integer, default: -500
property :open_stdin, [TrueClass, FalseClass], default: false, desired_state: false
property :outfile, String
property :port_bindings, PartialHashType, default: {}
property :pid_mode, String, default: ''
property :privileged, Boolean, default: false
property :publish_all_ports, Boolean, default: false
property :remove_volumes, Boolean
property :privileged, [TrueClass, FalseClass], default: false
property :publish_all_ports, [TrueClass, FalseClass], default: false
property :remove_volumes, [TrueClass, FalseClass], default: false
property :restart_maximum_retry_count, Integer, default: 0
property :restart_policy, String
property :ro_rootfs, Boolean, default: false
property :security_opts, [String, ArrayType]
property :runtime, String, default: 'runc'
property :ro_rootfs, [TrueClass, FalseClass], default: false
property :security_opt, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :shm_size, [String, Integer], default: '64m', coerce: proc { |v| coerce_to_bytes(v) }
property :signal, String, default: 'SIGTERM'
property :stdin_once, Boolean, default: false, desired_state: false
property :stdin_once, [TrueClass, FalseClass], default: false, desired_state: false
property :sysctls, Hash, default: {}
property :timeout, [Integer, nil], desired_state: false
property :tty, Boolean, default: false
property :timeout, Integer, desired_state: false
property :tty, [TrueClass, FalseClass], default: false
property :ulimits, [Array, nil], coerce: proc { |v| coerce_ulimits(v) }
property :user, String, default: ''
property :userns_mode, String, default: ''
property :uts_mode, String, default: ''
property :volumes, PartialHashType, default: {}, coerce: proc { |v| coerce_volumes(v) }
property :volumes_from, ArrayType
property :volumes_from, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :volume_driver, String
property :working_dir, [String, NilClass], default: ''
property :working_dir, String, default: ''
# Used to store the bind property since binds is an alias to volumes
property :volumes_binds, Array
@ -95,32 +80,324 @@ module DockerCookbook
# Used to store the state of the Docker container
property :container, Docker::Container, desired_state: false
# Used to store the state of the Docker container create options
property :create_options, Hash, default: {}, desired_state: false
# Used by :stop action. If the container takes longer than this
# many seconds to stop, kill itinstead. -1 (the default) means
# many seconds to stop, kill it instead. A nil value (the default) means
# never kill the container.
property :kill_after, Numeric, default: -1, desired_state: false
property :kill_after, [Integer, NilClass], default: nil, desired_state: false
alias cmd command
alias additional_host extra_hosts
alias rm autoremove
alias remove_automatically autoremove
alias host_name hostname
alias domainname domain_name
alias dnssearch dns_search
alias restart_maximum_retries restart_maximum_retry_count
alias volume volumes
alias binds volumes
alias volume_from volumes_from
alias destination outfile
alias workdir working_dir
alias_method :cmd, :command
alias_method :additional_host, :extra_hosts
alias_method :rm, :autoremove
alias_method :remove_automatically, :autoremove
alias_method :host_name, :hostname
alias_method :domainname, :domain_name
alias_method :dnssearch, :dns_search
alias_method :restart_maximum_retries, :restart_maximum_retry_count
alias_method :volume, :volumes
alias_method :binds, :volumes
alias_method :volume_from, :volumes_from
alias_method :destination, :outfile
alias_method :workdir, :working_dir
# ~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
# Begin classic Chef "provider" section
# ~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
###################
# Property helpers
###################
########################################################
def coerce_labels(v)
case v
when Hash, nil
v
else
Array(v).each_with_object({}) do |label, h|
parts = label.split(':')
h[parts[0]] = parts[1..-1].join(':')
end
end
end
def coerce_links(v)
case v
when DockerBase::UnorderedArray, nil
v
else
return nil if v.empty?
# Parse docker input of /source:/container_name/dest into source:dest
DockerBase::UnorderedArray.new(Array(v)).map! do |link|
if link =~ %r{^/(?<source>.+):/#{name}/(?<dest>.+)}
link = "#{Regexp.last_match[:source]}:#{Regexp.last_match[:dest]}"
end
link
end
end
end
def to_bytes(v)
n = v.to_i
u = v.gsub(/\d/, '').upcase
multiplier = case u
when 'B'
1
when 'K'
1024**1
when 'M'
1024**2
when 'G'
1024**3
when 'T'
1024**4
when 'P'
1024**5
when 'E'
1024**6
when 'Z'
1024**7
when 'Y'
1024**8
else
1
end
n * multiplier
end
def coerce_to_bytes(v)
case v
when Integer, nil
v
else
to_bytes(v)
end
end
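# Illustrative examples, added for clarity and not present in the upstream
# source: coerce_to_bytes('64m') resolves to 64 * 1024**2 = 67108864,
# coerce_to_bytes('1G') to 1073741824, and Integer values such as 536870912
# pass through unchanged.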
def coerce_log_opts(v)
case v
when Hash, nil
v
else
Array(v).each_with_object({}) do |log_opt, memo|
key, value = log_opt.split('=', 2)
memo[key] = value
end
end
end
def coerce_ulimits(v)
return v if v.nil?
Array(v).map do |u|
u = "#{u['Name']}=#{u['Soft']}:#{u['Hard']}" if u.is_a?(Hash)
u
end
end
def coerce_volumes(v)
case v
when DockerBase::PartialHash, nil
v
when Hash
DockerBase::PartialHash[v]
else
b = []
v = Array(v).to_a # in case v.is_A?(Chef::Node::ImmutableArray)
v.delete_if do |x|
parts = x.split(':')
b << x if parts.length > 1
end
b = nil if b.empty?
volumes_binds b
return DockerBase::PartialHash.new if v.empty?
v.each_with_object(DockerBase::PartialHash.new) { |volume, h| h[volume] = {} }
end
end
def state
# Always return the latest state, see #510
Docker::Container.get(container_name, {}, connection).info['State']
rescue StandardError
{}
end
def wait_running_state(v)
tries = running_wait_time
tries.times do
return if state['Running'] == v
sleep 1
end
return if state['Running'] == v
# Container failed to reach correct state: Throw an error
desired_state_str = v ? 'running' : 'not running'
raise Docker::Error::TimeoutError, "Container #{container_name} failed to change to #{desired_state_str} state after #{tries} seconds"
end
def port(v = nil)
return @port if v.nil?
exposed_ports coerce_exposed_ports(v)
port_bindings coerce_port_bindings(v)
@port = v
@port
end
def parse_port(v)
_, protocol = v.split('/')
parts = v.split(':')
case parts.length
when 3
host_ip = parts[0]
host_port = parts[1].split('-')
container_port = parts[2].split('-')
when 2
host_ip = '0.0.0.0'
host_port = parts[0].split('-')
container_port = parts[1].split('-')
when 1
host_ip = ''
host_port = ['']
container_port = parts[0].split('-')
end
host_port.map!(&:to_i) unless host_port == ['']
container_port.map!(&:to_i)
if host_port.count > 1
Chef::Log.fatal("FATAL: Invalid port range! #{host_port}") if host_port[0] > host_port[1]
host_port = (host_port[0]..host_port[1]).to_a
end
if container_port.count > 1
Chef::Log.fatal("FATAL: Invalid port range! #{container_port}") if container_port[0] > container_port[1]
container_port = (container_port[0]..container_port[1]).to_a
end
Chef::Log.fatal('FATAL: Port range size does not match!') if host_port.count > 1 && host_port.count != container_port.count
# qualify the port-binding protocol even when it is implicitly tcp #427.
protocol = 'tcp' if protocol.nil?
Array(container_port).map.with_index do |_, i|
{
'host_ip' => host_ip,
'host_port' => host_port[i].to_s,
'container_port' => "#{container_port[i]}/#{protocol}",
}
end
end
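# Illustrative examples, added for clarity and not present in the upstream source:
#   parse_port('8080:80')
#   #=> [{ 'host_ip' => '0.0.0.0', 'host_port' => '8080', 'container_port' => '80/tcp' }]
#   parse_port('127.0.0.1:53:53/udp')
#   #=> [{ 'host_ip' => '127.0.0.1', 'host_port' => '53', 'container_port' => '53/udp' }]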
def coerce_exposed_ports(v)
case v
when Hash, nil
v
else
x = Array(v).map { |a| parse_port(a) }
x.flatten!
x.each_with_object({}) do |y, h|
h[y['container_port']] = {}
end
end
end
def coerce_port_bindings(v)
case v
when Hash, nil
v
else
x = Array(v).map { |a| parse_port(a) }
x.flatten!
x.each_with_object({}) do |y, h|
h[y['container_port']] = [] unless h[y['container_port']]
h[y['container_port']] << {
'HostIp' => y['host_ip'],
'HostPort' => y['host_port'],
}
end
end
end
def coerce_env_file(v)
return v if v.empty?
Array(v).map { |f| ::File.readlines(f).map(&:strip) }.flatten
end
# log_driver and log_opts really handle this
def log_config(value = Chef::NOT_PASSED)
if value != Chef::NOT_PASSED
@log_config = value
log_driver value['Type']
log_opts value['Config']
end
return @log_config if defined?(@log_config)
def_logcfg = {}
def_logcfg['Type'] = log_driver if property_is_set?(:log_driver)
def_logcfg['Config'] = log_opts if property_is_set?(:log_opts)
def_logcfg = nil if def_logcfg.empty?
def_logcfg
end
# TODO: test image property in serverspec and kitchen, not only in rspec
# for full specs of image parsing, see spec/helpers_container_spec.rb
#
# If you say: `repo 'blah'`
# Image will be: `blah:latest`
#
# If you say: `repo 'blah'; tag '3.1'`
# Image will be: `blah:3.1`
#
# If you say: `image 'blah'`
# Repo will be: `blah`
# Tag will be: `latest`
#
# If you say: `image 'blah:3.1'`
# Repo will be: `blah`
# Tag will be: `3.1`
#
# If you say: `image 'repo/blah'`
# Repo will be: `repo/blah`
# Tag will be: `latest`
#
# If you say: `image 'repo/blah:3.1'`
# Repo will be: `repo/blah`
# Tag will be: `3.1`
#
# If you say: `image 'repo:1337/blah'`
# Repo will be: `repo:1337/blah`
# Tag will be: `latest`
#
# If you say: `image 'repo:1337/blah:3.1'`
# Repo will be: `repo:1337/blah`
# Tag will be: `3.1`
#
def image(image = nil)
if image
if image.include?('/')
# pathological case, a ':' may be present which starts the 'port'
# part of the image name and not a tag. example: 'host:1337/blah'
# fortunately, tags are only found in the 'basename' part of image
# so we can split on '/' and rebuild once the tag has been parsed.
dirname, _, basename = image.rpartition('/')
r, t = basename.split(':', 2)
r = [dirname, r].join('/')
else
# normal case, the ':' starts the tag part
r, t = image.split(':', 2)
end
repo r
tag t if t
end
"#{repo}:#{tag}"
end
def to_shellwords(command)
command.is_a?(String) ? ::Shellwords.shellwords(command) : command
end
######################
# Load Current Value
########################################################
######################
def to_snake_case(name)
# ExposedPorts -> _exposed_ports
name = name.gsub(/[A-Z]/) { |x| "_#{x.downcase}" }
# _exposed_ports -> exposed_ports
name = name[1..-1] if name.start_with?('_')
name
end
load_current_value do
# Grab the container and assign the container property
@ -149,6 +426,27 @@ module DockerCookbook
restart_maximum_retry_count container.info['HostConfig']['RestartPolicy']['MaximumRetryCount']
volumes_binds container.info['HostConfig']['Binds']
ro_rootfs container.info['HostConfig']['ReadonlyRootfs']
ip_address ip_address_from_container_networks(container) unless ip_address_from_container_networks(container).nil?
end
# Gets the ip address from the existing container
# current docker api of 1.16 does not have ['NetworkSettings']['Networks']
# For docker > 1.21 - use ['NetworkSettings']['Networks']
#
# @param container [Docker::Container] A container object
# @returns [String] An ip_address
def ip_address_from_container_networks(container)
# We use the first value in 'Networks'
# We can't assume it will be 'bridged'
# It might also not match the new_resource value
if container.info['NetworkSettings'] &&
container.info['NetworkSettings']['Networks'] &&
container.info['NetworkSettings']['Networks'].values[0] &&
container.info['NetworkSettings']['Networks'].values[0]['IPAMConfig'] &&
container.info['NetworkSettings']['Networks'].values[0]['IPAMConfig']['IPv4Address']
# Return the ip address listed
container.info['NetworkSettings']['Networks'].values[0]['IPAMConfig']['IPv4Address']
end
end
#########
@ -158,23 +456,6 @@ module DockerCookbook
# Super handy visual reference!
# http://gliderlabs.com/images/docker_events.png
default_action :run
declare_action_class.class_eval do
def whyrun_supported?
true
end
def call_action(action)
send("action_#{action}")
load_current_resource
end
def state
current_resource ? current_resource.state : {}
end
end
# Loads container specific labels excluding those of engine or image.
# This ensures idempotency.
def load_container_labels
@ -189,56 +470,11 @@ module DockerCookbook
public_send(:labels, labels)
end
def validate_container_create
if property_is_set?(:restart_policy) &&
restart_policy != 'no' &&
restart_policy != 'always' &&
restart_policy != 'unless-stopped' &&
restart_policy != 'on-failure'
raise Chef::Exceptions::ValidationFailed, 'restart_policy must be either no, always, unless-stopped, or on-failure.'
end
if autoremove == true && (property_is_set?(:restart_policy) && restart_policy != 'no')
raise Chef::Exceptions::ValidationFailed, 'Conflicting options restart_policy and autoremove.'
end
if detach == true &&
(
attach_stderr == true ||
attach_stdin == true ||
attach_stdout == true ||
stdin_once == true
)
raise Chef::Exceptions::ValidationFailed, 'Conflicting options detach, attach_stderr, attach_stdin, attach_stdout, stdin_once.'
end
if network_mode == 'host' &&
(
!(hostname.nil? || hostname.empty?) ||
!(mac_address.nil? || mac_address.empty?)
)
raise Chef::Exceptions::ValidationFailed, 'Cannot specify hostname or mac_address when network_mode is host.'
end
if network_mode == 'container' &&
(
!(hostname.nil? || hostname.empty?) ||
!(dns.nil? || dns.empty?) ||
!(dns_search.nil? || dns_search.empty?) ||
!(mac_address.nil? || mac_address.empty?) ||
!(extra_hosts.nil? || extra_hosts.empty?) ||
!(exposed_ports.nil? || exposed_ports.empty?) ||
!(port_bindings.nil? || port_bindings.empty?) ||
!(publish_all_ports.nil? || publish_all_ports.empty?) ||
!port.nil?
)
raise Chef::Exceptions::ValidationFailed, 'Cannot specify hostname, dns, dns_search, mac_address, extra_hosts, exposed_ports, port_bindings, publish_all_ports, port when network_mode is container.'
end
end
def parsed_hostname
return nil if network_mode == 'host'
hostname
action :run do
validate_container_create
call_action(:create)
call_action(:start)
call_action(:delete) if new_resource.autoremove
end
action :create do
@ -249,73 +485,94 @@ module DockerCookbook
with_retries do
config = {
'name' => container_name,
'Image' => "#{repo}:#{tag}",
'Labels' => labels,
'Cmd' => to_shellwords(command),
'AttachStderr' => attach_stderr,
'AttachStdin' => attach_stdin,
'AttachStdout' => attach_stdout,
'Domainname' => domain_name,
'Entrypoint' => to_shellwords(entrypoint),
'Env' => env,
'ExposedPorts' => exposed_ports,
'name' => new_resource.container_name,
'Image' => "#{new_resource.repo}:#{new_resource.tag}",
'Labels' => new_resource.labels,
'Cmd' => to_shellwords(new_resource.command),
'AttachStderr' => new_resource.attach_stderr,
'AttachStdin' => new_resource.attach_stdin,
'AttachStdout' => new_resource.attach_stdout,
'Domainname' => new_resource.domain_name,
'Entrypoint' => to_shellwords(new_resource.entrypoint),
'Env' => new_resource.env + new_resource.env_file,
'ExposedPorts' => new_resource.exposed_ports,
'Hostname' => parsed_hostname,
'MacAddress' => mac_address,
'NetworkDisabled' => network_disabled,
'OpenStdin' => open_stdin,
'StdinOnce' => stdin_once,
'Tty' => tty,
'User' => user,
'Volumes' => volumes,
'WorkingDir' => working_dir,
'MacAddress' => new_resource.mac_address,
'NetworkDisabled' => new_resource.network_disabled,
'OpenStdin' => new_resource.open_stdin,
'StdinOnce' => new_resource.stdin_once,
'Tty' => new_resource.tty,
'User' => new_resource.user,
'Volumes' => new_resource.volumes,
'WorkingDir' => new_resource.working_dir,
'HostConfig' => {
'Binds' => volumes_binds,
'CapAdd' => cap_add,
'CapDrop' => cap_drop,
'CgroupParent' => cgroup_parent,
'CpuShares' => cpu_shares,
'CpusetCpus' => cpuset_cpus,
'Devices' => devices,
'Dns' => dns,
'DnsSearch' => dns_search,
'ExtraHosts' => extra_hosts,
'IpcMode' => ipc_mode,
'Links' => links,
'Binds' => new_resource.volumes_binds,
'CapAdd' => new_resource.cap_add,
'CapDrop' => new_resource.cap_drop,
'CgroupParent' => new_resource.cgroup_parent,
'CpuShares' => new_resource.cpu_shares,
'CpusetCpus' => new_resource.cpuset_cpus,
'Devices' => new_resource.devices,
'Dns' => new_resource.dns,
'DnsSearch' => new_resource.dns_search,
'ExtraHosts' => new_resource.extra_hosts,
'IpcMode' => new_resource.ipc_mode,
'Init' => new_resource.init,
'KernelMemory' => new_resource.kernel_memory,
'Links' => new_resource.links,
'LogConfig' => log_config,
'Memory' => memory,
'MemorySwap' => memory_swap,
'NetworkMode' => network_mode,
'Privileged' => privileged,
'PidMode' => pid_mode,
'PortBindings' => port_bindings,
'PublishAllPorts' => publish_all_ports,
'Memory' => new_resource.memory,
'MemorySwap' => new_resource.memory_swap,
'MemorySwappiness' => new_resource.memory_swappiness,
'MemoryReservation' => new_resource.memory_reservation,
'NetworkMode' => new_resource.network_mode,
'OomKillDisable' => new_resource.oom_kill_disable,
'OomScoreAdj' => new_resource.oom_score_adj,
'Privileged' => new_resource.privileged,
'PidMode' => new_resource.pid_mode,
'PortBindings' => new_resource.port_bindings,
'PublishAllPorts' => new_resource.publish_all_ports,
'RestartPolicy' => {
'Name' => restart_policy,
'MaximumRetryCount' => restart_maximum_retry_count,
'Name' => new_resource.restart_policy,
'MaximumRetryCount' => new_resource.restart_maximum_retry_count,
},
'ReadonlyRootfs' => ro_rootfs,
'Sysctls' => sysctls,
'ReadonlyRootfs' => new_resource.ro_rootfs,
'Runtime' => new_resource.runtime,
'SecurityOpt' => new_resource.security_opt,
'ShmSize' => new_resource.shm_size,
'Sysctls' => new_resource.sysctls,
'Ulimits' => ulimits_to_hash,
'UsernsMode' => userns_mode,
'UTSMode' => uts_mode,
'VolumesFrom' => volumes_from,
'VolumeDriver' => volume_driver,
'UsernsMode' => new_resource.userns_mode,
'UTSMode' => new_resource.uts_mode,
'VolumesFrom' => new_resource.volumes_from,
'VolumeDriver' => new_resource.volume_driver,
},
}
net_config = {
'NetworkingConfig' => {
'EndpointsConfig' => {
network_mode => {
new_resource.network_mode => {
'IPAMConfig' => {
'IPv4Address' => ip_address,
'IPv4Address' => new_resource.ip_address,
},
'Aliases' => new_resource.network_aliases,
},
},
},
} if network_mode
} if new_resource.network_mode
config.merge! net_config
# Remove any options not supported in windows
if platform?('windows')
config['HostConfig'].delete('MemorySwappiness')
end
unless new_resource.health_check.empty?
config['Healthcheck'] = new_resource.health_check
end
# Store the state of the options and create the container
new_resource.create_options = config
Docker::Container.create(config, connection)
end
end
@ -324,44 +581,40 @@ module DockerCookbook
action :start do
return if state['Restarting']
return if state['Running']
converge_by "starting #{container_name}" do
converge_by "starting #{new_resource.container_name}" do
with_retries do
container.start
timeout ? container.wait(timeout) : container.wait unless detach
current_resource.container.start
unless new_resource.detach
new_resource.timeout ? current_resource.container.wait(new_resource.timeout) : current_resource.container.wait
end
end
wait_running_state(true) if detach
wait_running_state(true) if new_resource.detach
end
end
action :stop do
return unless state['Running']
kill_after_str = " (will kill after #{kill_after}s)" if kill_after != -1
converge_by "stopping #{container_name} #{kill_after_str}" do
kill_after_str = "(will kill after #{new_resource.kill_after}s)" if new_resource.kill_after
converge_by "stopping #{new_resource.container_name} #{kill_after_str}" do
begin
with_retries do
container.stop!('timeout' => kill_after)
current_resource.container.stop!('timeout' => new_resource.kill_after)
wait_running_state(false)
end
rescue Docker::Error::TimeoutError
raise Docker::Error::TimeoutError, "Container failed to stop, consider adding kill_after to the container #{container_name}"
raise Docker::Error::TimeoutError, "Container failed to stop, consider adding kill_after to the container #{new_resource.container_name}"
end
end
end
action :kill do
return unless state['Running']
converge_by "killing #{container_name}" do
with_retries { container.kill(signal: signal) }
converge_by "killing #{new_resource.container_name}" do
with_retries { current_resource.container.kill(signal: new_resource.signal) }
end
end
action :run do
validate_container_create
call_action(:create)
call_action(:start)
call_action(:delete) if autoremove
end
action :run_if_missing do
return if current_resource
call_action(:run)
@ -369,28 +622,28 @@ module DockerCookbook
action :pause do
return if state['Paused']
converge_by "pausing #{container_name}" do
with_retries { container.pause }
converge_by "pausing #{new_resource.container_name}" do
with_retries { current_resource.container.pause }
end
end
action :unpause do
return if current_resource && !state['Paused']
converge_by "unpausing #{container_name}" do
with_retries { container.unpause }
converge_by "unpausing #{new_resource.container_name}" do
with_retries { current_resource.container.unpause }
end
end
action :restart do
kill_after_str = " (will kill after #{kill_after}s)" if kill_after != -1
converge_by "restarting #{container_name} #{kill_after_str}" do
current_resource ? container.restart('timeout' => kill_after) : call_action(:run)
kill_after_str = " (will kill after #{new_resource.kill_after}s)" if new_resource.kill_after != -1
converge_by "restarting #{new_resource.container_name} #{kill_after_str}" do
current_resource ? current_resource.container.restart('timeout' => new_resource.kill_after) : call_action(:run)
end
end
action :reload do
converge_by "reloading #{container_name}" do
with_retries { container.kill(signal: 'SIGHUP') }
converge_by "reloading #{new_resource.container_name}" do
with_retries { current_resource.container.kill(signal: 'SIGHUP') }
end
end
@ -407,8 +660,8 @@ module DockerCookbook
return unless current_resource
call_action(:unpause)
call_action(:stop)
converge_by "deleting #{container_name}" do
with_retries { container.delete(force: force, v: remove_volumes) }
converge_by "deleting #{new_resource.container_name}" do
with_retries { current_resource.container.delete(force: new_resource.force, v: new_resource.remove_volumes) }
end
end
@ -417,19 +670,92 @@ module DockerCookbook
end
action :commit do
converge_by "committing #{container_name}" do
converge_by "committing #{new_resource.container_name}" do
with_retries do
new_image = container.commit
new_image.tag('repo' => repo, 'tag' => tag, 'force' => force)
new_image = current_resource.container.commit
new_image.tag('repo' => new_resource.repo, 'tag' => new_resource.tag, 'force' => new_resource.force)
end
end
end
action :export do
raise "Please set outfile property on #{container_name}" if outfile.nil?
converge_by "exporting #{container_name}" do
raise "Please set outfile property on #{new_resource.container_name}" if new_resource.outfile.nil?
converge_by "exporting #{new_resource.container_name}" do
with_retries do
::File.open(outfile, 'w') { |f| container.export { |chunk| f.write(chunk) } }
::File.open(new_resource.outfile, 'w') { |f| current_resource.container.export { |chunk| f.write(chunk) } }
end
end
end
declare_action_class.class_eval do
def validate_container_create
if new_resource.property_is_set?(:restart_policy) &&
new_resource.restart_policy != 'no' &&
new_resource.restart_policy != 'always' &&
new_resource.restart_policy != 'unless-stopped' &&
new_resource.restart_policy != 'on-failure'
raise Chef::Exceptions::ValidationFailed, 'restart_policy must be either no, always, unless-stopped, or on-failure.'
end
if new_resource.autoremove == true && (new_resource.property_is_set?(:restart_policy) && restart_policy != 'no')
raise Chef::Exceptions::ValidationFailed, 'Conflicting options restart_policy and autoremove.'
end
if new_resource.detach == true &&
(
new_resource.attach_stderr == true ||
new_resource.attach_stdin == true ||
new_resource.attach_stdout == true ||
new_resource.stdin_once == true
)
raise Chef::Exceptions::ValidationFailed, 'Conflicting options detach, attach_stderr, attach_stdin, attach_stdout, stdin_once.'
end
if new_resource.network_mode == 'host' &&
(
!(new_resource.hostname.nil? || new_resource.hostname.empty?) ||
!(new_resource.mac_address.nil? || new_resource.mac_address.empty?)
)
raise Chef::Exceptions::ValidationFailed, 'Cannot specify hostname or mac_address when network_mode is host.'
end
if new_resource.network_mode == 'container' &&
(
!(new_resource.hostname.nil? || new_resource.hostname.empty?) ||
!(new_resource.dns.nil? || new_resource.dns.empty?) ||
!(new_resource.dns_search.nil? || new_resource.dns_search.empty?) ||
!(new_resource.mac_address.nil? || new_resource.mac_address.empty?) ||
!(new_resource.extra_hosts.nil? || new_resource.extra_hosts.empty?) ||
!(new_resource.exposed_ports.nil? || new_resource.exposed_ports.empty?) ||
!(new_resource.port_bindings.nil? || new_resource.port_bindings.empty?) ||
!(new_resource.publish_all_ports.nil? || new_resource.publish_all_ports.empty?) ||
!new_resource.port.nil?
)
raise Chef::Exceptions::ValidationFailed, 'Cannot specify hostname, dns, dns_search, mac_address, extra_hosts, exposed_ports, port_bindings, publish_all_ports, port when network_mode is container.'
end
end
def parsed_hostname
return nil if new_resource.network_mode == 'host'
new_resource.hostname
end
def call_action(action)
send("action_#{action}")
load_current_resource
end
def state
current_resource ? current_resource.state : {}
end
def ulimits_to_hash
return nil if new_resource.ulimits.nil?
new_resource.ulimits.map do |u|
name = u.split('=')[0]
soft = u.split('=')[1].split(':')[0]
hard = u.split('=')[1].split(':')[1]
{ 'Name' => name, 'Soft' => soft.to_i, 'Hard' => hard.to_i }
end
end
end
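To make the property plumbing above concrete, here is a minimal, hypothetical recipe snippet (not taken from this commit; the image, port and size values are placeholders) showing how a few of the coerced properties surface in practice:

docker_container 'hello-service' do
  repo 'alpine'
  tag '3.8'
  command 'nc -ll -p 80 -e /bin/true' # a String is split via Shellwords into an Array
  port '8080:80'                      # expanded into exposed_ports and port_bindings
  memory '256m'                       # coerce_to_bytes => 268435456
  env ['APP_ENV=production']
  restart_policy 'on-failure'
  kill_after 10                       # :stop kills the container after 10 seconds
  action :run
end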

View File

@ -2,19 +2,18 @@ module DockerCookbook
class DockerExec < DockerBase
resource_name :docker_exec
property :host, [String, nil], default: lazy { default_host }
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
property :command, Array
property :container, String
property :timeout, Numeric, default: 60
property :timeout, Numeric, default: 60, desired_state: false
property :container_obj, Docker::Container, desired_state: false
alias cmd command
alias_method :cmd, :command
action :run do
converge_by "executing #{command} on #{container}" do
with_retries { container_obj Docker::Container.get(container, {}, connection) }
container_obj.exec(command, wait: timeout)
converge_by "executing #{new_resource.command} on #{new_resource.container}" do
with_retries { new_resource.container_obj Docker::Container.get(new_resource.container, {}, connection) }
new_resource.container_obj.exec(new_resource.command, wait: new_resource.timeout)
end
end
end
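As a quick, hypothetical usage sketch (the container name and command are placeholders), the resource above maps directly onto Docker::Container#exec:

docker_exec 'create marker file' do
  container 'hello-service'
  command ['touch', '/tmp/heartbeat']
  timeout 120
  action :run
end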

View File

@ -1,28 +1,25 @@
module DockerCookbook
class DockerImage < DockerBase
require 'docker'
require_relative 'helpers_image'
resource_name :docker_image
# Modify the default of read_timeout from 60 to 120
property :read_timeout, default: 120, desired_state: false
# https://docs.docker.com/reference/api/docker_remote_api_v1.20/
property :destination, [String, nil]
property :force, Boolean, default: false
property :host, [String, nil], default: lazy { default_host }, desired_state: false
property :nocache, Boolean, default: false
property :noprune, Boolean, default: false
# https://docs.docker.com/engine/api/v1.35/#tag/Image
property :destination, String
property :force, [TrueClass, FalseClass], default: false, desired_state: false
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
property :nocache, [TrueClass, FalseClass], default: false
property :noprune, [TrueClass, FalseClass], default: false
property :repo, String, name_property: true
property :rm, Boolean, default: true
property :rm, [TrueClass, FalseClass], default: true
property :source, String
property :tag, String, default: 'latest'
alias image repo
alias image_name repo
alias no_cache nocache
alias no_prune noprune
alias_method :image, :repo
alias_method :image_name, :repo
alias_method :no_cache, :nocache
alias_method :no_prune, :noprune
#########
# Actions
@ -30,13 +27,6 @@ module DockerCookbook
default_action :pull
declare_action_class.class_eval do
include DockerHelpers::Image
def whyrun_supported?
true
end
end
action :build do
converge_by "Build image #{image_identifier}" do
build_image
@ -90,5 +80,110 @@ module DockerCookbook
load_image
end
end
declare_action_class.class_eval do
################
# Helper methods
################
def build_from_directory
i = Docker::Image.build_from_dir(
new_resource.source,
{
'nocache' => new_resource.nocache,
'rm' => new_resource.rm,
},
connection
)
i.tag('repo' => new_resource.repo, 'tag' => new_resource.tag, 'force' => new_resource.force)
end
def build_from_dockerfile
i = Docker::Image.build(
IO.read(new_resource.source),
{
'nocache' => new_resource.nocache,
'rm' => new_resource.rm,
},
connection
)
i.tag('repo' => new_resource.repo, 'tag' => new_resource.tag, 'force' => new_resource.force)
end
def build_from_tar
i = Docker::Image.build_from_tar(
::File.open(new_resource.source, 'r'),
{
'nocache' => new_resource.nocache,
'rm' => new_resource.rm,
},
connection
)
i.tag('repo' => new_resource.repo, 'tag' => new_resource.tag, 'force' => new_resource.force)
end
def build_image
if ::File.directory?(new_resource.source)
build_from_directory
elsif ::File.extname(new_resource.source) == '.tar'
build_from_tar
else
build_from_dockerfile
end
end
def image_identifier
"#{new_resource.repo}:#{new_resource.tag}"
end
def import_image
with_retries do
i = Docker::Image.import(new_resource.source, {}, connection)
i.tag('repo' => new_resource.repo, 'tag' => new_resource.tag, 'force' => new_resource.force)
end
end
def pull_image
with_retries do
creds = credentails
original_image = Docker::Image.get(image_identifier, {}, connection) if Docker::Image.exist?(image_identifier, {}, connection)
new_image = Docker::Image.create({ 'fromImage' => image_identifier }, creds, connection)
!(original_image && original_image.id.start_with?(new_image.id))
end
end
def push_image
with_retries do
creds = credentails
i = Docker::Image.get(image_identifier, {}, connection)
i.push(creds, repo_tag: image_identifier)
end
end
def remove_image
with_retries do
i = Docker::Image.get(image_identifier, {}, connection)
i.remove(force: new_resource.force, noprune: new_resource.noprune)
end
end
def save_image
with_retries do
Docker::Image.save(new_resource.repo, new_resource.destination, connection)
end
end
def load_image
with_retries do
Docker::Image.load(new_resource.source, {}, connection)
end
end
def credentails
registry_host = parse_registry_host(new_resource.repo)
node.run_state['docker_auth'] && node.run_state['docker_auth'][registry_host] || (node.run_state['docker_auth'] ||= {})['index.docker.io']
end
end
end
end
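A hedged usage sketch (image names and paths are placeholders) covering the two most common paths through the helpers above, pull_image and build_image:

docker_image 'alpine' do
  tag '3.8'
  action :pull
end

docker_image 'local/app' do
  source '/opt/app' # a directory triggers build_from_directory; a .tar or a Dockerfile take the other branches
  action :build
end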

View File

@ -0,0 +1,39 @@
module DockerCookbook
class DockerImagePrune < DockerBase
resource_name :docker_image_prune
# Requires docker API v1.25
# Modify the default of read_timeout from 60 to 120
property :read_timeout, default: 120, desired_state: false
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
# https://docs.docker.com/engine/api/v1.35/#operation/ImagePrune
property :dangling, [TrueClass, FalseClass], default: true
property :prune_until, String
# https://docs.docker.com/engine/reference/builder/#label
property :with_label, String
property :without_label, String
#########
# Actions
#########
default_action :prune
action :prune do
# Have to call this method ourselves due to
# https://github.com/swipely/docker-api/pull/507
json = generate_json(new_resource)
# Post
res = connection.post('/images/prune', json)
Chef::Log.info res
end
def generate_json(new_resource)
opts = { filters: ["dangling=#{new_resource.dangling}"] }
opts[:filters].push("until=#{new_resource.prune_until}") if new_resource.property_is_set?(:prune_until)
opts[:filters].push("label=#{new_resource.with_label}") if new_resource.property_is_set?(:with_label)
opts[:filters].push("label!=#{new_resource.without_label}") if new_resource.property_is_set?(:without_label)
opts.to_json
end
end
end
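A hypothetical usage sketch for the new resource (the label and age values are placeholders); per generate_json above, this would POST {"filters":["dangling=true","until=24h","label=com.example.vendor=acme"]} to /images/prune:

docker_image_prune 'old images' do
  dangling true
  prune_until '24h'
  with_label 'com.example.vendor=acme'
  action :prune
end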

View File

@ -1,40 +0,0 @@
module DockerCookbook
class DockerInstallationBinary < DockerBase
require_relative 'helpers_installation_binary'
include DockerHelpers::InstallationBinary
#####################
# Resource properties
#####################
resource_name :docker_installation_binary
property :checksum, String, default: lazy { default_checksum }, desired_state: false
property :source, String, default: lazy { default_source }, desired_state: false
property :version, String, default: lazy { default_version }, desired_state: false
default_action :create
#########
# Actions
#########
action :create do
# Pull a precompiled binary off the network
remote_file docker_bin do
source new_resource.source
checksum new_resource.checksum
owner 'root'
group 'root'
mode '0755'
action :create
end
end
action :delete do
file docker_bin do
action :delete
end
end
end
end

View File

@ -1,14 +1,12 @@
module DockerCookbook
class DockerInstallationPackage < DockerBase
require_relative 'helpers_installation_package'
include DockerHelpers::InstallationPackage
# Resource properties
resource_name :docker_installation_package
provides :docker_installation, platform: 'amazon'
property :setup_docker_repo, [TrueClass, FalseClass], default: lazy { platform?('amazon') ? false : true }, desired_state: false
property :repo_channel, String, default: 'stable'
property :package_name, String, default: lazy { default_package_name }, desired_state: false
property :package_version, String, default: lazy { version_string(version) }, desired_state: false
property :version, String, default: lazy { default_docker_version }, desired_state: false
@ -16,17 +14,161 @@ module DockerCookbook
# Actions
action :create do
package package_name do
version package_version
options package_options
if new_resource.setup_docker_repo
if platform_family?('rhel', 'fedora')
platform = platform?('fedora') ? 'fedora' : 'centos'
yum_repository 'Docker' do
baseurl "https://download.docker.com/linux/#{platform}/#{node['platform_version'].to_i}/x86_64/#{new_resource.repo_channel}"
gpgkey "https://download.docker.com/linux/#{platform}/gpg"
description "Docker #{new_resource.repo_channel.capitalize} repository"
gpgcheck true
enabled true
end
elsif platform_family?('debian')
apt_repository 'Docker' do
components Array(new_resource.repo_channel)
uri "https://download.docker.com/linux/#{node['platform']}"
arch 'amd64'
keyserver 'keyserver.ubuntu.com'
key "https://download.docker.com/linux/#{node['platform']}/gpg"
action :add
end
else
Chef::Log.warn("Cannot setup the Docker repo for platform #{node['platform']}. Skipping.")
end
end
package new_resource.package_name do
version new_resource.package_version unless amazon?
options new_resource.package_options
action :install
end
end
action :delete do
package package_name do
package new_resource.package_name do
action :remove
end
end
# These are helpers for the properties so they are not in an action class
def default_docker_version
'18.06.0'
end
def default_package_name
return 'docker' if amazon?
'docker-ce'
end
def el7?
return true if node['platform_family'] == 'rhel' && node['platform_version'].to_i == 7
false
end
def fedora?
return true if node['platform'] == 'fedora'
false
end
def debuntu?
return true if node['platform_family'] == 'debian'
false
end
def debian?
return true if node['platform'] == 'debian'
false
end
def ubuntu?
return true if node['platform'] == 'ubuntu'
false
end
def jessie?
return true if node['platform'] == 'debian' && node['platform_version'].to_i == 8
false
end
def stretch?
return true if node['platform'] == 'debian' && node['platform_version'].to_i == 9
false
end
def buster?
return true if node['platform'] == 'debian' && node['platform_version'].to_i == 10
false
end
def trusty?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '14.04'
false
end
def xenial?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '16.04'
false
end
def artful?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '17.10'
false
end
def bionic?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '18.04'
false
end
def amazon?
return true if node['platform'] == 'amazon'
false
end
# https://github.com/chef/chef/issues/4103
def version_string(v)
codename = if jessie?
'jessie'
elsif stretch?
'stretch'
elsif buster?
'buster'
elsif trusty?
'trusty'
elsif xenial?
'xenial'
elsif artful?
'artful'
elsif bionic?
'bionic'
end
# https://github.com/seemethere/docker-ce-packaging/blob/9ba8e36e8588ea75209d813558c8065844c953a0/deb/gen-deb-ver#L16-L20
test_version = '3'
if v.to_f < 17.06 && debuntu?
return "#{v}~ce-0~debian-#{codename}" if debian?
return "#{v}~ce-0~ubuntu-#{codename}" if ubuntu?
elsif v == '17.03.3' && el7?
return "#{v}.ce-1.el7"
elsif v.to_f < 18.06 && !bionic?
return "#{v}.ce-1.el7.centos" if el7?
return "#{v}~ce-0~debian" if debian?
return "#{v}~ce-0~ubuntu" if ubuntu?
elsif v.to_f >= 18.09 && debuntu?
return "5:#{v}~#{test_version}-0~debian-#{codename}" if debian?
return "5:#{v}~#{test_version}-0~ubuntu-#{codename}" if ubuntu?
elsif v.to_f >= 18.09 && el7?
return "#{v}-#{test_version}.el7"
else
return "#{v}.ce" if fedora?
return "#{v}.ce-#{test_version}.el7" if el7?
return "#{v}~ce~#{test_version}-0~debian" if debian?
return "#{v}~ce~#{test_version}-0~ubuntu" if ubuntu?
v
end
end
end
end
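For orientation, a minimal recipe sketch of the package installer above; the version and channel values are illustrative, not mandated by this commit. On an Ubuntu 18.04 node, version_string would expand '18.09.0' to '5:18.09.0~3-0~ubuntu-bionic' before handing it to the package resource.
# Illustrative usage only -- values are assumptions, not part of this diff.
docker_installation_package 'default' do
  version '18.09.0'
  setup_docker_repo true
  repo_channel 'stable'
  action :create
end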

View File

@ -1,8 +1,5 @@
module DockerCookbook
class DockerInstallationScript < DockerBase
#####################
# Resource properties
#####################
resource_name :docker_installation_script
provides :docker_installation, os: 'linux'
@ -12,9 +9,9 @@ module DockerCookbook
default_action :create
################
# helper methods
################
#########################
# property helper methods
#########################
def default_script_url
case repo
@ -32,18 +29,16 @@ module DockerCookbook
#########
action :create do
package 'curl' do
action :install
end
package 'curl'
execute 'install docker' do
command "curl -sSL #{script_url} | sh"
command "curl -sSL #{new_resource.script_url} | sh"
creates '/usr/bin/docker'
end
end
action :delete do
package 'docker-engine' do
package %w(docker-ce docker-engine) do
action :remove
end
end
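A rough usage sketch for the script installer; the resource simply curls script_url and pipes it to sh, so repo/script_url are the only knobs. The repo value below is an assumption about the default channel mapping.
# Hypothetical recipe usage; repo 'main' is assumed to resolve to the stock get.docker.com script.
docker_installation_script 'default' do
  repo 'main'
  action :create
end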

View File

@ -1,25 +1,54 @@
module DockerCookbook
class DockerInstallationTarball < DockerBase
require_relative 'helpers_installation_tarball'
include DockerHelpers::InstallationTarball
#####################
# Resource properties
#####################
resource_name :docker_installation_tarball
property :checksum, String, default: lazy { default_checksum }, desired_state: false
property :source, String, default: lazy { default_source }, desired_state: false
property :version, String, default: lazy { default_version }, desired_state: false
property :channel, String, default: 'stable', desired_state: false
property :version, String, default: '18.06.0', desired_state: false
default_action :create
##################
# Property Helpers
##################
def docker_kernel
node['kernel']['name']
end
def docker_arch
node['kernel']['machine']
end
def default_source
"https://download.docker.com/#{docker_kernel.downcase}/static/#{channel}/#{docker_arch}/docker-#{version}-ce.tgz"
end
def default_checksum
case docker_kernel
when 'Darwin'
case version
when '17.12.0' then 'dc673421e0368c2c970203350a9d0cb739bc498c897e832779369b0b2a9c6192'
when '18.03.0' then '2d44ed2ac1e24cb22b6e72cb16d74fc9e60245a8ac1d4f79475604b804f46d38'
when '18.03.1' then 'bbfb9c599a4fdb45523496c2ead191056ff43d6be90cf0e348421dd56bc3dcf0'
when '18.06.0' then '5489360ae1894375a56255fb821fcf368b33027cd4f4bbaebf5176c05b79f420'
end
when 'Linux'
case version
when '17.12.0' then '692e1c72937f6214b1038def84463018d8e320c8eaf8530546c84c2f8f9c767d'
when '18.03.0' then 'e5dff6245172081dbf14285dafe4dede761f8bc1750310156b89928dbf56a9ee'
when '18.03.1' then '0e245c42de8a21799ab11179a4fce43b494ce173a8a2d6567ea6825d6c5265aa'
when '18.06.0' then '1c2fa625496465c68b856db0ba850eaad7a16221ca153661ca718de4a2217705'
end
end
end
#########
# Actions
#########
action :create do
package 'tar'
# Pull a precompiled tarball off the network
remote_file docker_tarball do
source new_resource.source
@ -43,5 +72,22 @@ module DockerCookbook
action :delete
end
end
################
# Action Helpers
################
declare_action_class.class_eval do
def docker_bin_prefix
'/usr/bin'
end
def docker_bin
"#{docker_bin_prefix}/docker"
end
def docker_tarball
"#{Chef::Config[:file_cache_path]}/docker-#{new_resource.version}.tgz"
end
end
end
end
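As a sketch, on a Linux x86_64 node the defaults above resolve source to https://download.docker.com/linux/static/stable/x86_64/docker-18.06.0-ce.tgz; a recipe only needs to pick a version that has an entry in default_checksum. Values below are examples.
# Illustrative usage; any version listed in default_checksum above will verify.
docker_installation_tarball 'default' do
  version '18.06.0'
  channel 'stable'
  action :create
end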

View File

@ -1,25 +1,62 @@
module DockerCookbook
class DockerNetwork < DockerBase
require 'docker'
require_relative 'helpers_network'
include DockerHelpers::Network
resource_name :docker_network
property :auxiliary_addresses, [String, Array, nil], coerce: proc { |v| coerce_auxiliary_addresses(v) }
property :container, String, desired_state: false
property :driver, String
property :driver_opts, PartialHashType
property :enable_ipv6, [TrueClass, FalseClass]
property :gateway, [String, Array, nil], coerce: proc { |v| coerce_gateway(v) }
property :host, [String, nil], default: lazy { default_host }, desired_state: false
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
property :id, String
property :internal, [TrueClass, FalseClass]
property :ip_range, [String, Array, nil], coerce: proc { |v| coerce_ip_range(v) }
property :ipam_driver, String
property :network, Docker::Network, desired_state: false
property :network_name, String, name_property: true
property :subnet, [String, Array, nil], coerce: proc { |v| coerce_subnet(v) }
alias aux_address auxiliary_addresses
alias_method :aux_address, :auxiliary_addresses
###################
# property helpers
###################
def coerce_auxiliary_addresses(v)
ray = []
Array(v).each do |e|
case e
when String, Array, nil
ray += Array(e)
when Hash
e.each { |key, val| ray << "#{key}=#{val}" }
end
end
ray.length == 1 ? ray[0] : ray
end
def coerce_gateway(v)
case v
when String
v.split('/')[0]
when Array
ray = Array(v).map { |a| a.split('/')[0] }
ray.length == 1 ? ray[0] : ray
end
end
def coerce_subnet(v)
Array(v).length == 1 ? Array(v)[0] : v
end
def coerce_ip_range(v)
Array(v).length == 1 ? Array(v)[0] : v
end
####################
# load current value
####################
load_current_value do
begin
@ -55,6 +92,8 @@ module DockerCookbook
driver network.info['Driver']
driver_opts network.info['Options']
internal network.info['Internal']
enable_ipv6 network.info['EnableIPv6']
end
action :create do
@ -63,21 +102,23 @@ module DockerCookbook
with_retries do
options = {}
options['Driver'] = driver if driver
options['Options'] = driver_opts if driver_opts
ipam_options = consolidate_ipam(subnet, ip_range, gateway, aux_address)
options['Driver'] = new_resource.driver if new_resource.driver
options['Options'] = new_resource.driver_opts if new_resource.driver_opts
ipam_options = consolidate_ipam(new_resource.subnet, new_resource.ip_range, new_resource.gateway, new_resource.aux_address)
options['IPAM'] = { 'Config' => ipam_options } unless ipam_options.empty?
options['IPAM']['Driver'] = ipam_driver if ipam_driver
Docker::Network.create(network_name, options)
options['IPAM']['Driver'] = new_resource.ipam_driver if new_resource.ipam_driver
options['EnableIPv6'] = new_resource.enable_ipv6 if new_resource.enable_ipv6
options['Internal'] = new_resource.internal if new_resource.internal
Docker::Network.create(new_resource.network_name, options)
end
end
end
action :delete do
return unless current_resource
converge_by "deleting #{network_name}" do
converge_by "deleting #{new_resource.network_name}" do
with_retries do
network.delete
current_resource.network.delete
end
end
end
@ -87,16 +128,16 @@ module DockerCookbook
end
action :connect do
unless container
unless new_resource.container
raise Chef::Exceptions::ValidationFailed, 'Container id or name is required for action :connect'
end
if current_resource
container_index = network.info['Containers'].values.index { |c| c['Name'] == container }
container_index = current_resource.network.info['Containers'].values.index { |c| c['Name'] == new_resource.container }
if container_index.nil?
converge_by("connect #{container}") do
converge_by("connect #{new_resource.container}") do
with_retries do
network.connect(container)
current_resource.network.connect(new_resource.container)
end
end
end
@ -106,16 +147,16 @@ module DockerCookbook
end
action :disconnect do
unless container
unless new_resource.container
raise Chef::Exceptions::ValidationFailed, 'Container id or name is required for action :disconnect'
end
if current_resource
container_index = network.info['Containers'].values.index { |c| c['Name'] == container }
container_index = current_resource.network.info['Containers'].values.index { |c| c['Name'] == new_resource.container }
unless container_index.nil?
converge_by("disconnect #{container}") do
converge_by("disconnect #{new_resource.container}") do
with_retries do
network.disconnect(container)
current_resource.network.disconnect(new_resource.container)
end
end
end
@ -123,5 +164,82 @@ module DockerCookbook
Chef::Log.warn("Cannot disconnect from #{network_name}: network does not exist")
end
end
declare_action_class.class_eval do
require 'ipaddr'
######
# IPAM
######
def consolidate_ipam(subnets, ranges, gateways, auxaddrs)
subnets = Array(subnets)
ranges = Array(ranges)
gateways = Array(gateways)
auxaddrs = Array(auxaddrs)
subnets = [] if subnets.empty?
ranges = [] if ranges.empty?
gateways = [] if gateways.empty?
auxaddrs = [] if auxaddrs.empty?
if subnets.size < ranges.size || subnets.size < gateways.size
raise 'every ip-range or gateway must have a corresponding subnet'
end
data = {}
# Check overlapping subnets
subnets.each do |s|
data.each do |k, _|
if subnet_matches(s, k) || subnet_matches(k, s)
raise 'multiple overlapping subnet configuration is not supported'
end
end
data[s] = { 'Subnet' => s, 'AuxiliaryAddresses' => {} }
end
ranges.each do |r|
match = false
subnets.each do |s|
ok = subnet_matches(s, r)
next unless ok
if data[s].fetch('IPRange', '') != ''
raise 'cannot configure multiple ranges on the same subnet'
end
data[s]['IPRange'] = r
match = true
end
raise "no matching subnet for range #{r}" unless match
end
gateways.each do |g|
subnets.each do |s|
ok = subnet_matches(s, g)
next unless ok
unless data[s].fetch('Gateway', '').empty?
raise "cannot configure multiple gateways (#{g}, #{data[s]['Gateway']}) for the same subnet (#{s})"
end
data[s]['Gateway'] = g
end
end
auxaddrs.each do |aa|
key, a = aa.split('=')
match = false
subnets.each do |s|
# require 'pry' ; binding.pry
ok = subnet_matches(s, a)
next unless ok
data[s]['AuxiliaryAddresses'][key] = a
match = true
end
raise "no matching subnet for aux-address #{a}" unless match
end
data.values
end
def subnet_matches(subnet, data)
IPAddr.new(subnet).include?(IPAddr.new(data))
end
end
end
end
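To make the IPAM consolidation above concrete, a hedged example: ip_range and aux-address entries must fall inside one of the declared subnets or consolidate_ipam raises. All addresses below are placeholders.
# Hypothetical network; driver and addresses are placeholders only.
docker_network 'example_bridge' do
  driver 'bridge'
  subnet '192.168.88.0/24'
  gateway '192.168.88.1'
  aux_address ['reserved1=192.168.88.10']
  action :create
end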

View File

@ -0,0 +1,125 @@
module DockerCookbook
class DockerPlugin < DockerBase
resource_name :docker_plugin
property :local_alias, String, name_property: true
property :remote_tag, String, default: 'latest'
property :remote, [String, nil], default: nil
property :grant_privileges, [Array, TrueClass], default: []
property :options, Hash, default: {}
default_action :install
action :install do
return if plugin_exists?(local_name)
converge_by "Install plugin #{plugin_identifier} as #{local_name}" do
install_plugin
configure_plugin
end
end
action :enable do
converge_by "Enable plugin #{local_name}" do
enable_plugin
end unless plugin_enabled?(local_name)
end
action :disable do
converge_by "Disable plugin #{local_name}" do
disable_plugin
end if plugin_enabled?(local_name)
end
action :update do
converge_by "Configure plugin #{local_name}" do
configure_plugin
end
end
action :remove do
converge_by "Remove plugin #{local_name}" do
remove_plugin
end
end
declare_action_class.class_eval do
def remote_name
return new_resource.remote unless new_resource.remote.nil? || new_resource.remote.empty?
new_resource.local_alias
end
def plugin_identifier
"#{remote_name}:#{new_resource.remote_tag}"
end
def local_name
new_resource.local_alias
end
def plugin_exists?(name)
Docker.connection.get("/plugins/#{name}/json")
true
rescue Docker::Error::NotFoundError
false
end
def plugin_enabled?(name)
JSON.parse(Docker.connection.get("/plugins/#{name}/json"))['Enabled']
end
def install_plugin
privileges = \
if new_resource.grant_privileges == true
# user gave a blanket statement about privileges; fetch required privileges from Docker
# we pass the identifier as both :name and :remote to accommodate different API versions
JSON.parse Docker.connection.get('/plugins/privileges',
name: plugin_identifier,
remote: plugin_identifier)
else
# user gave a specific list of privileges
new_resource.grant_privileges
end
# actually do the plugin install
body = ''
opts = { remote: plugin_identifier, name: local_name }
Chef::Log.info("pulling plugin #{opts} with privileges #{privileges}")
Docker.connection.post('/plugins/pull', opts,
body: JSON.generate(privileges),
response_block: response_block(body))
last_line = body.split("\n").select { |item| !item.empty? }.last
info = JSON.parse last_line
raise info['error'] if info.key?('error')
end
def response_block(body)
lambda do |chunk, _remaining, _total|
body << chunk
end
end
def configure_plugin
options_for_json = []
new_resource.options.each_pair do |k, v|
options_for_json.push("#{k}=#{v}")
end
Docker.connection.post("/plugins/#{local_name}/set", {}, body: JSON.generate(options_for_json))
end
def enable_plugin
Docker.connection.post("/plugins/#{local_name}/enable", timeout: new_resource.read_timeout)
end
def disable_plugin
Docker.connection.post("/plugins/#{local_name}/disable", timeout: new_resource.read_timeout)
end
def remove_plugin
Docker.connection.delete("/plugins/#{local_name}")
end
end
end
end
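A hedged usage sketch of the new docker_plugin resource; 'vieux/sshfs' is just a commonly used example plugin and the option key is arbitrary. With grant_privileges true, the resource fetches the plugin's required privileges from the Docker API before pulling it.
# Illustrative only; plugin name, tag, and options are assumptions.
docker_plugin 'sshfs' do
  remote 'vieux/sshfs'
  remote_tag 'latest'
  grant_privileges true
  options 'DEBUG' => '1'
  action :install
end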

View File

@ -1,25 +1,29 @@
module DockerCookbook
class DockerRegistry < DockerBase
require 'docker'
require_relative 'helpers_auth'
resource_name :docker_registry
property :email, [String, nil]
property :password, [String, nil]
property :serveraddress, [String, nil], name_property: true
property :username, [String, nil]
property :email, String
property :password, String,
sensitive: true
property :serveraddress, String,
name_property: true
property :username, String
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
action :login do
tries = api_retries
tries = new_resource.api_retries
registry_host = parse_registry_host(serveraddress)
registry_host = parse_registry_host(new_resource.serveraddress)
(node.run_state['docker_auth'] ||= {})[registry_host] = {
'serveraddress' => registry_host,
'username' => username,
'password' => password,
'email' => email,
'username' => new_resource.username,
'password' => new_resource.password,
'email' => new_resource.email,
}
begin
@ -28,7 +32,7 @@ module DockerCookbook
body: node.run_state['docker_auth'][registry_host].to_json
)
rescue Docker::Error::ServerError, Docker::Error::UnauthorizedError
raise Docker::Error::AuthenticationError, "#{username} failed to authenticate with #{serveraddress}" if (tries -= 1) == 0
raise Docker::Error::AuthenticationError, "#{new_resource.username} failed to authenticate with #{new_resource.serveraddress}" if (tries -= 1) == 0
retry
end
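For context, a minimal login sketch; the credentials and registry address are placeholders, and in practice they would come from an encrypted data bag or similar secret store. The action records the auth data in node.run_state and verifies it against the registry, retrying up to api_retries times.
# Hypothetical credentials -- placeholders only.
docker_registry 'https://index.docker.io/v1/' do
  username 'example_user'
  password 'example_password'
  email 'user@example.com'
  action :login
end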

View File

@ -8,37 +8,30 @@ module DockerCookbook
provides :docker_service
# installation type and service_manager
property :install_method, %w(binary script package tarball none auto), default: 'auto', desired_state: false
property :install_method, %w(script package tarball none auto), default: 'auto', desired_state: false
property :service_manager, %w(execute sysvinit upstart systemd auto), default: 'auto', desired_state: false
# docker_installation_script
property :repo, desired_state: false
property :script_url, String, desired_state: false
# docker_installation_binary and tarball
# docker_installation_tarball
property :checksum, String, desired_state: false
property :docker_bin, String, desired_state: false
property :source, String, desired_state: false
# docker_installation_package
property :package_version, String, desired_state: false
property :package_name, String, desired_state: false
property :setup_docker_repo, [TrueClass, FalseClass], desired_state: false
# binary, package and tarball
# package and tarball
property :version, String, desired_state: false
property :package_options, [String, nil], desired_state: false
property :package_options, String, desired_state: false
################
# Helper Methods
################
def validate_install_method
if property_is_set?(:version) &&
install_method != 'binary' &&
install_method != 'package' &&
install_method != 'tarball'
raise Chef::Exceptions::ValidationFailed, 'Version property only supported for binary, package and tarball installation methods'
end
end
def copy_properties_to(to, *properties)
properties = self.class.properties.keys if properties.empty?
properties.each do |p|
@ -51,18 +44,24 @@ module DockerCookbook
end
action_class.class_eval do
def validate_install_method
if new_resource.property_is_set?(:version) &&
new_resource.install_method != 'package' &&
new_resource.install_method != 'tarball'
raise Chef::Exceptions::ValidationFailed, 'Version property only supported for package and tarball installation methods'
end
end
def installation(&block)
case install_method
case new_resource.install_method
when 'auto'
install = docker_installation(name, &block)
when 'binary'
install = docker_installation_binary(name, &block)
install = docker_installation(new_resource.name, &block)
when 'script'
install = docker_installation_script(name, &block)
install = docker_installation_script(new_resource.name, &block)
when 'package'
install = docker_installation_package(name, &block)
install = docker_installation_package(new_resource.name, &block)
when 'tarball'
install = docker_installation_tarball(name, &block)
install = docker_installation_tarball(new_resource.name, &block)
when 'none'
Chef::Log.info('Skipping Docker installation. Assuming it was handled previously.')
return
@ -72,17 +71,17 @@ module DockerCookbook
end
def svc_manager(&block)
case service_manager
case new_resource.service_manager
when 'auto'
svc = docker_service_manager(name, &block)
svc = docker_service_manager(new_resource.name, &block)
when 'execute'
svc = docker_service_manager_execute(name, &block)
svc = docker_service_manager_execute(new_resource.name, &block)
when 'sysvinit'
svc = docker_service_manager_sysvinit(name, &block)
svc = docker_service_manager_sysvinit(new_resource.name, &block)
when 'upstart'
svc = docker_service_manager_upstart(name, &block)
svc = docker_service_manager_upstart(new_resource.name, &block)
when 'systemd'
svc = docker_service_manager_systemd(name, &block)
svc = docker_service_manager_systemd(new_resource.name, &block)
end
copy_properties_to(svc)
svc
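Putting the two helpers together, a hedged sketch of the composite resource in a recipe; values are examples only. Note that after this change, pinning :version with install_method 'script' (or the removed 'binary' method) fails validate_install_method.
# Example values only.
docker_service 'default' do
  install_method 'package'
  version '18.06.0'
  service_manager 'auto'
  action [:create, :start]
end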

View File

@ -3,7 +3,6 @@ module DockerCookbook
################
# Helper Methods
################
require 'docker'
require_relative 'helpers_service'
include DockerHelpers::Service
@ -16,64 +15,69 @@ module DockerCookbook
# register with the resource resolution system
provides :docker_service_manager
# Environment variables to docker service
property :env_vars, Hash
# daemon management
property :instance, String, name_property: true, required: true, desired_state: false
property :auto_restart, Boolean, default: false
property :api_cors_header, [String, nil]
property :bridge, [String, nil]
property :instance, String, name_property: true, desired_state: false
property :auto_restart, [TrueClass, FalseClass], default: false
property :api_cors_header, String
property :bridge, String
property :bip, [IPV4_ADDR, IPV4_CIDR, IPV6_ADDR, IPV6_CIDR, nil]
property :cluster_store, [String, nil]
property :cluster_advertise, [String, nil]
property :cluster_store_opts, ArrayType
property :debug, [Boolean, nil]
property :daemon, Boolean, default: true
property :dns, ArrayType
property :dns_search, [Array, nil]
property :cluster_store, String
property :cluster_advertise, String
property :cluster_store_opts, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :daemon, [TrueClass, FalseClass], default: true
property :data_root, String
property :debug, [TrueClass, FalseClass], default: false
property :dns, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :dns_search, Array
property :exec_driver, ['native', 'lxc', nil]
property :exec_opts, ArrayType
property :fixed_cidr, [String, nil]
property :fixed_cidr_v6, [String, nil]
property :group, [String, nil]
property :graph, [String, nil]
property :host, [String, Array], coerce: proc { |v| coerce_host(v) }
property :icc, [Boolean, nil]
property :exec_opts, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :fixed_cidr, String
property :fixed_cidr_v6, String
property :group, String, default: 'docker'
property :host, [String, Array], coerce: proc { |v| coerce_host(v) }, desired_state: false
property :icc, [TrueClass, FalseClass]
property :insecure_registry, [Array, String, nil], coerce: proc { |v| coerce_insecure_registry(v) }
property :ip, [IPV4_ADDR, IPV6_ADDR, nil]
property :ip_forward, [Boolean, nil]
property :ipv4_forward, Boolean, default: true
property :ipv6_forward, Boolean, default: true
property :ip_masq, [Boolean, nil]
property :iptables, [Boolean, nil]
property :ipv6, [Boolean, nil]
property :log_level, [:debug, :info, :warn, :error, :fatal, nil]
property :ip_forward, [TrueClass, FalseClass]
property :ipv4_forward, [TrueClass, FalseClass], default: true
property :ipv6_forward, [TrueClass, FalseClass], default: true
property :ip_masq, [TrueClass, FalseClass]
property :iptables, [TrueClass, FalseClass]
property :ipv6, [TrueClass, FalseClass]
property :default_ip_address_pool, String
property :log_level, %w(debug info warn error fatal)
property :labels, [String, Array], coerce: proc { |v| coerce_daemon_labels(v) }, desired_state: false
property :log_driver, %w( json-file syslog journald gelf fluentd awslogs splunk none )
property :log_opts, ArrayType
property :mount_flags, String, default: 'slave'
property :mtu, [String, nil]
property :log_driver, %w(json-file syslog journald gelf fluentd awslogs splunk none)
property :log_opts, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :mount_flags, String
property :mtu, String
property :pidfile, String, default: lazy { "/var/run/#{docker_name}.pid" }
property :registry_mirror, [String, nil]
property :storage_driver, ArrayType
property :selinux_enabled, [Boolean, nil]
property :storage_opts, ArrayType
property :default_ulimit, ArrayType
property :userland_proxy, [Boolean, nil]
property :disable_legacy_registry, [Boolean, nil]
property :userns_remap, [String, nil]
property :registry_mirror, String
property :storage_driver, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :selinux_enabled, [TrueClass, FalseClass]
property :storage_opts, Array
property :default_ulimit, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :userland_proxy, [TrueClass, FalseClass]
property :disable_legacy_registry, [TrueClass, FalseClass]
property :userns_remap, String
# These are options specific to systemd configuration such as
# LimitNOFILE or TasksMax that you may want to use to customize
# the environment in which Docker runs.
property :systemd_opts, ArrayType
property :systemd_opts, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
property :systemd_socket_opts, [String, Array], coerce: proc { |v| v.nil? ? nil : Array(v) }
# These are unvalidated daemon arguments passed in as a string.
property :misc_opts, [String, nil]
property :misc_opts, String
# environment variables to set before running daemon
property :http_proxy, [String, nil]
property :https_proxy, [String, nil]
property :no_proxy, [String, nil]
property :tmpdir, [String, nil]
property :http_proxy, String
property :https_proxy, String
property :no_proxy, String
property :tmpdir, String
# logging
property :logfile, String, default: '/var/log/docker.log'
@ -83,12 +87,9 @@ module DockerCookbook
allowed_actions :start, :stop, :restart
alias label labels
alias tlscacert tls_ca_cert
alias tlscert tls_server_cert
alias tlskey tls_server_key
alias tlsverify tls_verify
alias run_group group
alias_method :label, :labels
alias_method :run_group, :group
alias_method :graph, :data_root
declare_action_class.class_eval do
def libexec_dir
@ -112,7 +113,7 @@ module DockerCookbook
variables(
docker_cmd: docker_cmd,
libexec_dir: libexec_dir,
service_timeout: service_timeout
service_timeout: new_resource.service_timeout
)
cookbook 'docker'
action :create

View File

@ -2,8 +2,6 @@ module DockerCookbook
class DockerServiceManagerExecute < DockerServiceBase
resource_name :docker_service_manager_execute
provides :docker_service_manager, os: 'linux'
# Start the service
action :start do
# enable ipv4 forwarding

View File

@ -2,85 +2,78 @@ module DockerCookbook
class DockerServiceManagerSystemd < DockerServiceBase
resource_name :docker_service_manager_systemd
provides :docker_service_manager, platform: 'fedora'
provides :docker_service_manager, platform: %w(redhat centos scientific oracle) do |node| # ~FC005
node['platform_version'].to_f >= 7.0
end
provides :docker_service_manager, platform: 'debian' do |node|
node['platform_version'].to_f >= 8.0
end
provides :docker_service_manager, platform: 'ubuntu' do |node|
node['platform_version'].to_f >= 15.04
provides :docker_service_manager, os: 'linux' do |_node|
Chef::Platform::ServiceHelpers.service_resource_providers.include?(:systemd)
end
action :start do
create_docker_wait_ready
# stock systemd unit file
template "/lib/systemd/system/#{docker_name}.service" do
source 'systemd/docker.service.erb'
owner 'root'
group 'root'
mode '0644'
variables(
docker_name: docker_name,
docker_socket: connect_socket.sub(%r{unix://|fd://}, ''),
docker_mount_flags: mount_flags
)
cookbook 'docker'
action :create
not_if { docker_name == 'default' && ::File.exist?('/lib/systemd/system/docker.service') }
end
# stock systemd socket file
template "/lib/systemd/system/#{docker_name}.socket" do
source 'systemd/docker.socket.erb'
owner 'root'
group 'root'
mode '0644'
variables(
docker_name: docker_name,
docker_socket: connect_socket.sub(%r{unix://|fd://}, '')
)
cookbook 'docker'
action :create
not_if { docker_name == 'default' && ::File.exist?('/lib/systemd/system/docker.socket') }
end
template "/etc/systemd/system/#{docker_name}.service" do
source 'systemd/docker.service-override.erb'
owner 'root'
group 'root'
mode '0644'
variables(
config: new_resource,
docker_daemon_cmd: docker_daemon_cmd,
systemd_args: systemd_args,
docker_wait_ready: "#{libexec_dir}/#{docker_name}-wait-ready",
docker_mount_flags: mount_flags
docker_name: docker_name,
docker_socket: connect_socket
)
action connect_socket.nil? ? :delete : :create
not_if { docker_name == 'default' && ::File.exist?('/lib/systemd/system/docker.socket') }
end
# stock systemd unit file
template "/lib/systemd/system/#{docker_name}.service" do
source 'systemd/docker.service.erb'
cookbook 'docker'
notifies :run, 'execute[systemctl daemon-reload]', :immediately
action :create
owner 'root'
group 'root'
mode '0644'
variables(
docker_name: docker_name,
docker_daemon_cmd: docker_daemon_cmd,
docker_socket: connect_socket
)
not_if { docker_name == 'default' && ::File.exist?('/lib/systemd/system/docker.service') }
end
# this overrides the main systemd socket
template "/etc/systemd/system/#{docker_name}.socket" do
source 'systemd/docker.socket-override.erb'
cookbook 'docker'
owner 'root'
group 'root'
mode '0644'
variables(
config: new_resource,
docker_name: docker_name,
docker_socket: connect_socket.sub(%r{unix://|fd://}, '')
docker_socket: connect_socket,
systemd_socket_args: systemd_socket_args
)
action connect_socket.nil? ? :delete : :create
end
# this overrides the main systemd service
template "/etc/systemd/system/#{docker_name}.service" do
source 'systemd/docker.service-override.erb'
cookbook 'docker'
owner 'root'
group 'root'
mode '0644'
variables(
config: new_resource,
docker_name: docker_name,
docker_socket: connect_socket,
docker_daemon_cmd: docker_daemon_cmd,
systemd_args: systemd_args,
docker_wait_ready: "#{libexec_dir}/#{docker_name}-wait-ready",
env_vars: new_resource.env_vars
)
notifies :run, 'execute[systemctl daemon-reload]', :immediately
action :create
notifies :run, "execute[systemctl try-restart #{docker_name}]", :immediately
end
# avoid 'Unit file changed on disk' warning
@ -89,12 +82,19 @@ module DockerCookbook
action :nothing
end
# restart if the template resources change
execute "systemctl try-restart #{docker_name}" do
command "/bin/systemctl try-restart #{docker_name}"
action :nothing
end
# service management resource
service docker_name do
provider Chef::Provider::Service::Systemd
supports status: true
action [:enable, :start]
only_if { ::File.exist?("/lib/systemd/system/#{docker_name}.service") }
retries 1
end
end
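A hedged sketch of how the new systemd_socket_opts (and the existing systemd_opts) might be fed in, assuming, as elsewhere in this cookbook, that docker_service copies these properties down to the systemd manager via copy_properties_to; the directive values are examples, not recommendations.
# Illustrative unit/socket override directives only.
docker_service 'default' do
  install_method 'package'
  systemd_opts ['TasksMax=infinity', 'LimitNOFILE=1048576']
  systemd_socket_opts ['SocketMode=0660']
  action [:create, :start]
end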

View File

@ -2,22 +2,14 @@ module DockerCookbook
class DockerServiceManagerSysvinitDebian < DockerServiceBase
resource_name :docker_service_manager_sysvinit_debian
provides :docker_service_manager, platform: 'debian' do |node|
provides :docker_service_manager, platform: 'debian' do |node| # ~FC005
node['platform_version'].to_f < 8.0
end
provides :docker_service_manager, platform: 'ubuntu' do |node|
node['platform_version'].to_f < 12.04
end
provides :docker_service_manager_sysvinit, platform: 'debian' do |node|
node['platform_version'].to_f < 8.0
end
provides :docker_service_manager_sysvinit, platform: 'ubuntu' do |node|
node['platform_version'].to_f < 12.04
end
action :start do
create_docker_wait_ready
create_init
@ -50,6 +42,7 @@ module DockerCookbook
template "/etc/init.d/#{docker_name}" do
source 'sysvinit/docker-debian.erb'
cookbook 'docker'
owner 'root'
group 'root'
mode '0755'
@ -57,20 +50,16 @@ module DockerCookbook
docker_name: docker_name,
dockerd_bin_link: dockerd_bin_link,
docker_daemon_arg: docker_daemon_arg,
docker_daemon_opts: docker_daemon_opts.join(' '),
docker_wait_ready: "#{libexec_dir}/#{docker_name}-wait-ready"
)
cookbook 'docker'
action :create
end
template "/etc/default/#{docker_name}" do
source 'default/docker.erb'
variables(
config: new_resource,
dockerd_bin_link: dockerd_bin_link,
docker_daemon_opts: docker_daemon_opts.join(' ')
)
cookbook 'docker'
variables(config: new_resource)
action :create
end
end

View File

@ -4,13 +4,13 @@ module DockerCookbook
provides :docker_service_manager, platform: 'amazon'
provides :docker_service_manager, platform: 'suse'
provides :docker_service_manager, platform: %w(redhat centos scientific oracle) do |node| # ~FC005
provides :docker_service_manager, platform_family: 'rhel' do |node|
node['platform_version'].to_f <= 7.0
end
provides :docker_service_manager_sysvinit, platform: 'amazon'
provides :docker_service_manager_sysvinit, platform: 'suse'
provides :docker_service_manager_sysvinit, platform: %w(redhat centos scientific oracle) do |node| # ~FC005
provides :docker_service_manager_sysvinit, platform_family: 'rhel' do |node|
node['platform_version'].to_f <= 7.0
end
@ -41,10 +41,10 @@ module DockerCookbook
link dockerd_bin_link do
to dockerd_bin
link_type :hard
action :create
end
template "/etc/init.d/#{docker_name}" do
cookbook 'docker'
source 'sysvinit/docker-rhel.erb'
owner 'root'
group 'root'
@ -55,18 +55,17 @@ module DockerCookbook
docker_daemon_cmd: docker_daemon_cmd,
docker_wait_ready: "#{libexec_dir}/#{docker_name}-wait-ready"
)
cookbook 'docker'
action :create
notifies :restart, "service[#{docker_name}]", :immediately
end
template "/etc/sysconfig/#{docker_name}" do
source 'sysconfig/docker.erb'
variables(
config: new_resource,
docker_daemon_opts: docker_daemon_opts.join(' ')
)
cookbook 'docker'
action :create
source 'sysconfig/docker.erb'
owner 'root'
group 'root'
mode '0644'
variables(config: new_resource)
notifies :restart, "service[#{docker_name}]", :immediately
end
end

View File

@ -2,8 +2,10 @@ module DockerCookbook
class DockerServiceManagerUpstart < DockerServiceBase
resource_name :docker_service_manager_upstart
provides :docker_service_manager, platform: 'ubuntu'
provides :docker_service_manager, platform: 'linuxmint'
provides :docker_service_manager, platform_family: 'debian' do |_node|
Chef::Platform::ServiceHelpers.service_resource_providers.include?(:upstart) &&
!Chef::Platform::ServiceHelpers.service_resource_providers.include?(:systemd)
end
action :start do
create_docker_wait_ready
@ -16,33 +18,30 @@ module DockerCookbook
template "/etc/init/#{docker_name}.conf" do
source 'upstart/docker.conf.erb'
cookbook 'docker'
owner 'root'
group 'root'
mode '0644'
variables(
docker_name: docker_name,
dockerd_bin_link: dockerd_bin_link,
docker_daemon_arg: docker_daemon_arg,
docker_wait_ready: "#{libexec_dir}/#{docker_name}-wait-ready"
docker_daemon_cmd: [dockerd_bin_link, docker_daemon_arg, docker_daemon_opts].join(' '),
docker_raw_logs_arg: docker_raw_logs_arg,
docker_wait_ready: "#{libexec_dir}/#{docker_name}-wait-ready",
docker_socket: connect_socket
)
cookbook 'docker'
action :create
notifies :stop, "service[#{docker_name}]", :immediately
notifies :start, "service[#{docker_name}]", :immediately
end
template "/etc/default/#{docker_name}" do
source 'default/docker.erb'
variables(
config: new_resource,
dockerd_bin_link: dockerd_bin_link,
docker_daemon_opts: docker_daemon_opts.join(' ')
)
cookbook 'docker'
action :create
variables(config: new_resource)
notifies :restart, "service[#{docker_name}]", :immediately
end
service docker_name do
provider Chef::Provider::Service::Upstart
supports status: true
supports status: true, restart: false
action :start
end
end
@ -50,7 +49,7 @@ module DockerCookbook
action :stop do
service docker_name do
provider Chef::Provider::Service::Upstart
supports status: true
supports status: true, restart: false
action :stop
end
end

View File

@ -6,18 +6,18 @@ module DockerCookbook
property :target_tag, String
property :to_repo, String
property :to_tag, String
property :force, Boolean, default: false
property :force, [TrueClass, FalseClass], default: false, desired_state: false
#########
# Actions
#########
action :tag do
return if force == false && Docker::Image.exist?("#{to_repo}:#{to_tag}")
return if new_resource.force == false && Docker::Image.exist?("#{new_resource.to_repo}:#{new_resource.to_tag}")
begin
converge_by "update #{target_repo}:#{target_tag} to #{to_repo}:#{to_tag}" do
i = Docker::Image.get("#{target_repo}:#{target_tag}")
i.tag('repo' => to_repo, 'tag' => to_tag, 'force' => force)
converge_by "update #{new_resource.target_repo}:#{new_resource.target_tag} to #{new_resource.to_repo}:#{new_resource.to_tag}" do
i = Docker::Image.get("#{new_resource.target_repo}:#{new_resource.target_tag}")
i.tag('repo' => new_resource.to_repo, 'tag' => new_resource.to_tag, 'force' => new_resource.force)
end
rescue Docker::Error => e
raise e.message
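For reference, a hedged example of the resource above; repositories and tags are placeholders. With force left at its default of false, the action is skipped when to_repo:to_tag already exists.
# Placeholder repositories and tags.
docker_tag 'retag hello-world' do
  target_repo 'hello-world'
  target_tag 'latest'
  to_repo 'registry.example.com:5000/team/hello-world'
  to_tag 'v1'
  action :tag
end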

View File

@ -1,12 +1,10 @@
module DockerCookbook
class DockerVolume < DockerBase
require 'docker'
resource_name :docker_volume
property :driver, String, desired_state: false
property :host, [String, nil], default: lazy { default_host }, desired_state: false
property :opts, [String, Array, nil], desired_state: false
property :host, [String, nil], default: lazy { ENV['DOCKER_HOST'] }, desired_state: false
property :opts, Hash, desired_state: false
property :volume, Docker::Volume, desired_state: false
property :volume_name, String, name_property: true
@ -19,15 +17,18 @@ module DockerCookbook
end
action :create do
converge_by "creating volume #{volume_name}" do
Docker::Volume.create(volume_name, {}, connection)
end if volume.nil?
converge_by "creating volume #{new_resource.volume_name}" do
opts = {}
opts['Driver'] = new_resource.driver if new_resource.property_is_set?(:driver)
opts['DriverOpts'] = new_resource.opts if new_resource.property_is_set?(:opts)
Docker::Volume.create(new_resource.volume_name, opts, connection)
end if current_resource.nil?
end
action :remove do
converge_by "removing volume #{volume_name}" do
volume.remove
end unless volume.nil?
converge_by "removing volume #{new_resource.volume_name}" do
current_resource.volume.remove
end unless current_resource.nil?
end
end
end
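A hedged usage sketch for the reworked volume resource; with opts now a Hash, driver options are passed through to the Engine as DriverOpts. The driver and option keys below are examples only.
# Example driver options for a local tmpfs-backed volume (illustrative).
docker_volume 'scratch' do
  driver 'local'
  opts 'type' => 'tmpfs', 'device' => 'tmpfs', 'o' => 'size=100m'
  action :create
end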

View File

@ -1,10 +0,0 @@
module DockerCookbook
module DockerHelpers
module Authentication
# https://github.com/docker/docker/blob/4fcb9ac40ce33c4d6e08d5669af6be5e076e2574/registry/auth.go#L231
def parse_registry_host(val)
val.sub(%r{https?://}, '').split('/').first
end
end
end
end

View File

@ -1,110 +0,0 @@
module DockerCookbook
module DockerHelpers
module Base
require 'shellwords'
# Misc
def to_snake_case(name)
# ExposedPorts -> _exposed_ports
name = name.gsub(/[A-Z]/) { |x| "_#{x.downcase}" }
# _exposed_ports -> exposed_ports
name = name[1..-1] if name.start_with?('_')
name
end
##########
# coercion
##########
def coerce_labels(v)
case v
when Hash, nil
v
else
Array(v).each_with_object({}) do |label, h|
parts = label.split(':')
h[parts[0]] = parts[1]
end
end
end
def coerce_shell_command(v)
return nil if v.nil?
return DockerBase::ShellCommandString.new(
::Shellwords.join(v)
) if v.is_a?(Array)
DockerBase::ShellCommandString.new(v)
end
################
# Helper methods
################
def api_version
@api_version ||= Docker.version(connection)['ApiVersion']
end
def connection
@connection ||= begin
opts = {}
opts[:read_timeout] = read_timeout if read_timeout
opts[:write_timeout] = write_timeout if write_timeout
if host =~ /^tcp:/
opts[:scheme] = 'https' if tls || !tls_verify.nil?
opts[:ssl_ca_file] = tls_ca_cert if tls_ca_cert
opts[:client_cert] = tls_client_cert if tls_client_cert
opts[:client_key] = tls_client_key if tls_client_key
end
Docker::Connection.new(host || Docker.url, opts)
end
end
def with_retries(&_block)
tries = api_retries
begin
yield
# Only catch errors that can be fixed with retries.
rescue Docker::Error::ServerError, # 404
Docker::Error::UnexpectedResponseError, # 400
Docker::Error::TimeoutError,
Docker::Error::IOError
tries -= 1
retry if tries > 0
raise
end
end
def call_action(_action)
new_resource.run_action
end
def default_host
return nil unless ENV['DOCKER_HOST']
ENV['DOCKER_HOST']
end
def default_tls
return nil unless ENV['DOCKER_TLS']
ENV['DOCKER_TLS']
end
def default_tls_verify
return nil unless ENV['DOCKER_TLS_VERIFY']
ENV['DOCKER_TLS_VERIFY']
end
def default_tls_cert_path(v)
return nil unless ENV['DOCKER_CERT_PATH']
case v
when 'ca'
"#{ENV['DOCKER_CERT_PATH']}/ca.pem"
when 'cert'
"#{ENV['DOCKER_CERT_PATH']}/cert.pem"
when 'key'
"#{ENV['DOCKER_CERT_PATH']}/key.pem"
end
end
end
end
end

View File

@ -1,236 +0,0 @@
module DockerCookbook
module DockerHelpers
module Container
def coerce_links(v)
case v
when DockerBase::UnorderedArray, nil
v
else
return nil if v.empty?
# Parse docker input of /source:/container_name/dest into source:dest
DockerBase::UnorderedArray.new(Array(v)).map! do |link|
if link =~ %r{^/(?<source>.+):/#{name}/(?<dest>.+)}
link = "#{Regexp.last_match[:source]}:#{Regexp.last_match[:dest]}"
end
link
end
end
end
def coerce_log_opts(v)
case v
when Hash, nil
v
else
Array(v).each_with_object({}) do |log_opt, memo|
key, value = log_opt.split('=', 2)
memo[key] = value
end
end
end
def coerce_ulimits(v)
return v if v.nil?
Array(v).map do |u|
u = "#{u['Name']}=#{u['Soft']}:#{u['Hard']}" if u.is_a?(Hash)
u
end
end
def coerce_volumes(v)
case v
when DockerBase::PartialHash, nil
v
when Hash
DockerBase::PartialHash[v]
else
b = []
v = Array(v).to_a # in case v.is_A?(Chef::Node::ImmutableArray)
v.delete_if do |x|
parts = x.split(':')
b << x if parts.length > 1
end
b = nil if b.empty?
volumes_binds b
return DockerBase::PartialHash.new if v.empty?
v.each_with_object(DockerBase::PartialHash.new) { |volume, h| h[volume] = {} }
end
end
def state
# Always return the latest state, see #510
return Docker::Container.get(container_name, {}, connection).info['State']
rescue
return {}
end
def wait_running_state(v)
tries = running_wait_time
tries.times do
return if state['Running'] == v
sleep 1
end
return if state['Running'] == v
# Container failed to reach correct state: Throw an error
desired_state_str = v ? 'running' : 'not running'
raise Docker::Error::TimeoutError, "Container #{container_name} failed to change to #{desired_state_str} state after #{tries} seconds"
end
def port(v = nil)
return @port if v.nil?
exposed_ports coerce_exposed_ports(v)
port_bindings coerce_port_bindings(v)
@port = v
@port
end
def parse_port(v)
parts = v.split(':')
case parts.length
when 3
host_ip = parts[0]
host_port = parts[1]
container_port = parts[2]
when 2
host_ip = '0.0.0.0'
host_port = parts[0]
container_port = parts[1]
when 1
host_ip = ''
host_port = ''
container_port = parts[0]
end
port_range, protocol = container_port.split('/')
if port_range.include?('-')
port_range = container_port.split('-')
port_range.map!(&:to_i)
Chef::Log.fatal("FATAL: Invalid port range! #{container_port}") if port_range[0] > port_range[1]
port_range = (port_range[0]..port_range[1]).to_a
end
# qualify the port-binding protocol even when it is implicitly tcp #427.
protocol = 'tcp' if protocol.nil?
Array(port_range).map do |port|
{
'host_ip' => host_ip,
'host_port' => host_port,
'container_port' => "#{port}/#{protocol}",
}
end
end
def coerce_exposed_ports(v)
case v
when Hash, nil
v
else
x = Array(v).map { |a| parse_port(a) }
x.flatten!
x.each_with_object({}) do |y, h|
h[y['container_port']] = {}
end
end
end
def coerce_port_bindings(v)
case v
when Hash, nil
v
else
x = Array(v).map { |a| parse_port(a) }
x.flatten!
x.each_with_object({}) do |y, h|
h[y['container_port']] = [] unless h[y['container_port']]
h[y['container_port']] << {
'HostIp' => y['host_ip'],
'HostPort' => y['host_port'],
}
end
end
end
# log_driver and log_opts really handle this
def log_config(value = Chef::NOT_PASSED)
if value != Chef::NOT_PASSED
@log_config = value
log_driver value['Type']
log_opts value['Config']
end
return @log_config if defined?(@log_config)
def_logcfg = {}
def_logcfg['Type'] = log_driver if property_is_set?(:log_driver)
def_logcfg['Config'] = log_opts if property_is_set?(:log_opts)
def_logcfg = nil if def_logcfg.empty?
def_logcfg
end
# TODO: test image property in serverspec and kitchen, not only in rspec
# for full specs of image parsing, see spec/helpers_container_spec.rb
#
# If you say: `repo 'blah'`
# Image will be: `blah:latest`
#
# If you say: `repo 'blah'; tag '3.1'`
# Image will be: `blah:3.1`
#
# If you say: `image 'blah'`
# Repo will be: `blah`
# Tag will be: `latest`
#
# If you say: `image 'blah:3.1'`
# Repo will be: `blah`
# Tag will be: `3.1`
#
# If you say: `image 'repo/blah'`
# Repo will be: `repo/blah`
# Tag will be: `latest`
#
# If you say: `image 'repo/blah:3.1'`
# Repo will be: `repo/blah`
# Tag will be: `3.1`
#
# If you say: `image 'repo:1337/blah'`
# Repo will be: `repo:1337/blah`
# Tag will be: `latest'
#
# If you say: `image 'repo:1337/blah:3.1'`
# Repo will be: `repo:1337/blah`
# Tag will be: `3.1`
#
def image(image = nil)
if image
if image.include?('/')
# pathological case, a ':' may be present which starts the 'port'
# part of the image name and not a tag. example: 'host:1337/blah'
# fortunately, tags are only found in the 'basename' part of image
# so we can split on '/' and rebuild once the tag has been parsed.
dirname, _, basename = image.rpartition('/')
r, t = basename.split(':', 2)
r = [dirname, r].join('/')
else
# normal case, the ':' starts the tag part
r, t = image.split(':', 2)
end
repo r
tag t if t
end
"#{repo}:#{tag}"
end
def to_shellwords(command)
return nil if command.nil?
Shellwords.shellwords(command)
end
def ulimits_to_hash
return nil if ulimits.nil?
ulimits.map do |u|
name = u.split('=')[0]
soft = u.split('=')[1].split(':')[0]
hard = u.split('=')[1].split(':')[1]
{ 'Name' => name, 'Soft' => soft.to_i, 'Hard' => hard.to_i }
end
end
end
end
end

View File

@ -1,104 +0,0 @@
module DockerCookbook
module DockerHelpers
module Image
################
# Helper methods
################
def build_from_directory
i = Docker::Image.build_from_dir(
source,
{
'nocache' => nocache,
'rm' => rm,
},
connection
)
i.tag('repo' => repo, 'tag' => tag, 'force' => force)
end
def build_from_dockerfile
i = Docker::Image.build(
IO.read(source),
{
'nocache' => nocache,
'rm' => rm,
},
connection
)
i.tag('repo' => repo, 'tag' => tag, 'force' => force)
end
def build_from_tar
i = Docker::Image.build_from_tar(
::File.open(source, 'r'),
{
'nocache' => nocache,
'rm' => rm,
},
connection
)
i.tag('repo' => repo, 'tag' => tag, 'force' => force)
end
def build_image
if ::File.directory?(source)
build_from_directory
elsif ::File.extname(source) == '.tar'
build_from_tar
else
build_from_dockerfile
end
end
def image_identifier
"#{repo}:#{tag}"
end
def import_image
with_retries do
i = Docker::Image.import(source, {}, connection)
i.tag('repo' => repo, 'tag' => tag, 'force' => force)
end
end
def pull_image
with_retries do
registry_host = parse_registry_host(repo)
creds = node.run_state['docker_auth'] && node.run_state['docker_auth'][registry_host] || (node.run_state['docker_auth'] ||= {})['index.docker.io']
original_image = Docker::Image.get(image_identifier, {}, connection) if Docker::Image.exist?(image_identifier, {}, connection)
new_image = Docker::Image.create({ 'fromImage' => image_identifier }, creds, connection)
!(original_image && original_image.id.start_with?(new_image.id))
end
end
def push_image
with_retries do
i = Docker::Image.get(image_identifier, {}, connection)
i.push
end
end
def remove_image
with_retries do
i = Docker::Image.get(image_identifier, {}, connection)
i.remove(force: force, noprune: noprune)
end
end
def save_image
with_retries do
Docker::Image.save(repo, destination, connection)
end
end
def load_image
with_retries do
Docker::Image.load(source, {}, connection)
end
end
end
end
end

View File

@ -1,66 +0,0 @@
module DockerCookbook
module DockerHelpers
module InstallationBinary
def docker_bin
'/usr/bin/docker'
end
def docker_kernel
node['kernel']['name']
end
def docker_arch
node['kernel']['machine']
end
def default_source
"https://get.docker.com/builds/#{docker_kernel}/#{docker_arch}/docker-#{version}"
end
def default_checksum
case docker_kernel
when 'Darwin'
case version
when '1.6.0' then '9e960e925561b4ec2b81f52b6151cd129739c1f4fba91ce94bdc0333d7d98c38'
when '1.6.2' then 'f29b8b2185c291bd276f7cdac45a674f904e964426d5b969fda7b8ef6b8ab557'
when '1.7.0' then '1c8ee59249fdde401afebc9a079cb75d7674f03d2491789fb45c88020a8c5783'
when '1.7.1' then 'b8209b4382d0b4292c756dd055c12e5efacec2055d5900ac91efc8e81d317cf9'
when '1.8.1' then '0f5db35127cf14b57614ad7513296be600ddaa79182d8d118d095cb90c721e3a'
when '1.8.2' then 'cef593612752e5a50bd075931956075a534b293b7002892072397c3093fe11a6'
when '1.8.3' then 'b5a63a0e6fb393de8c3543c83433224796c7545901963ff3e3e9a41b1430c9cd'
when '1.9.0' then '91a8701e41a479def5371a333657c58c36478602e1f5eb1835457a3880232a2e'
when '1.9.1' then '8750ccc2098ec94ef7db110e0016ab02cfa47a1a76f0deb3faa50335b5ec0df9'
when '1.10.0' then 'f8dc0c7ef2a7efbe0e062017822066e55a40c752b9e92a636359f59ef562d79f'
when '1.10.1' then 'de4057057acd259ec38b5244a40d806993e2ca219e9869ace133fad0e09cedf2'
when '1.10.2' then '29249598587ad8f8597235bbeb11a11888fffb977b8089ea80b5ac5267ba9f2e'
end
when 'Linux'
case version
when '1.6.0' then '526fbd15dc6bcf2f24f99959d998d080136e290bbb017624a5a3821b63916ae8'
when '1.6.2' then 'e131b2d78d9f9e51b0e5ca8df632ac0a1d48bcba92036d0c839e371d6cf960ec'
when '1.7.1' then '4d535a62882f2123fb9545a5d140a6a2ccc7bfc7a3c0ec5361d33e498e4876d5'
when '1.8.1' then '843f90f5001e87d639df82441342e6d4c53886c65f72a5cc4765a7ba3ad4fc57'
when '1.8.2' then '97a3f5924b0b831a310efa8bf0a4c91956cd6387c4a8667d27e2b2dd3da67e4d'
when '1.8.3' then 'f024bc65c45a3778cf07213d26016075e8172de8f6e4b5702bedde06c241650f'
when '1.9.0' then '5d46455aac507e231fd2a558459779f1994f7151d6cb027efabfa36f568cf017'
when '1.9.1' then '52286a92999f003e1129422e78be3e1049f963be1888afc3c9a99d5a9af04666'
when '1.10.0' then 'a66b20423b7d849aa8ef448b98b41d18c45a30bf3fe952cc2ba4760600b18087'
when '1.10.1' then 'de4057057acd259ec38b5244a40d806993e2ca219e9869ace133fad0e09cedf2'
when '1.10.2' then '3fcac4f30e1c1a346c52ba33104175ae4ccbd9b9dbb947f56a0a32c9e401b768'
end
end
end
def default_version
if node['platform'] == 'amazon' ||
node['platform'] == 'ubuntu' && node['platform_version'].to_f < 15.04 ||
node['platform_family'] == 'rhel' && node['platform_version'].to_i < 7 ||
node['platform_family'] == 'debian' && node['platform_version'].to_i <= 7
'1.6.2'
else
'1.10.2'
end
end
end
end
end

View File

@ -1,105 +0,0 @@
module DockerCookbook
module DockerHelpers
module InstallationPackage
def el6?
return true if node['platform_family'] == 'rhel' && node['platform_version'].to_i == 6
false
end
def el7?
return true if node['platform_family'] == 'rhel' && node['platform_version'].to_i == 7
false
end
def fedora?
return true if node['platform'] == 'fedora'
false
end
def wheezy?
return true if node['platform'] == 'debian' && node['platform_version'].to_i == 7
false
end
def jesse?
return true if node['platform'] == 'debian' && node['platform_version'].to_i == 8
false
end
def precise?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '12.04'
false
end
def trusty?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '14.04'
return true if node['platform'] == 'linuxmint' && node['platform_version'] =~ /^17\.[0-9]$/
false
end
def vivid?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '15.04'
false
end
def wily?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '15.10'
false
end
def xenial?
return true if node['platform'] == 'ubuntu' && node['platform_version'] == '16.04'
false
end
def amazon?
return true if node['platform'] == 'amazon'
false
end
# https://github.com/chef/chef/issues/4103
def version_string(v)
ubuntu_prefix = if Gem::Version.new(v) > Gem::Version.new('1.12.3')
'ubuntu-'
else
''
end
debian_prefix = if Gem::Version.new(v) > Gem::Version.new('1.12.3')
'debian-'
else
''
end
return "#{v}-1.el6" if el6?
return "#{v}-1.el7.centos" if el7?
return "#{v}-1.17.amzn1" if amazon?
return "#{v}-1.fc#{node['platform_version'].to_i}" if fedora?
return "#{v}-0~#{debian_prefix}wheezy" if wheezy?
return "#{v}-0~#{debian_prefix}jessie" if jesse?
return "#{v}-0~#{ubuntu_prefix}precise" if precise?
return "#{v}-0~#{ubuntu_prefix}trusty" if trusty?
return "#{v}-0~#{ubuntu_prefix}vivid" if vivid?
return "#{v}-0~#{ubuntu_prefix}wily" if wily?
return "#{v}-0~#{ubuntu_prefix}xenial" if xenial?
v
end
def default_docker_version
return '1.7.1' if el6?
return '1.9.1' if vivid?
return '1.12.6' if amazon?
'1.13.1'
end
def default_package_name
return 'docker' if amazon?
'docker-engine'
end
def docker_bin
'/usr/bin/docker'
end
end
end
end

View File

@ -1,50 +0,0 @@
module DockerCookbook
module DockerHelpers
module InstallationTarball
def docker_bin_prefix
'/usr/bin'
end
def docker_bin
"#{docker_bin_prefix}/docker"
end
def docker_tarball
"#{Chef::Config[:file_cache_path]}/docker-#{version}.tgz"
end
def docker_kernel
node['kernel']['name']
end
def docker_arch
node['kernel']['machine']
end
def default_source
"https://get.docker.com/builds/#{docker_kernel}/#{docker_arch}/docker-#{version}.tgz"
end
def default_checksum
case docker_kernel
when 'Darwin'
case version
when '1.11.0' then '25e4f5f37e2e17beb20e5a468674750350824059bdeeca09d6a941bca8fc4f73'
when '1.11.1' then '6d35487fbcc7e3f722f3d5f3e5c070a41d87c88e3770f52ae28460a689c40efd'
when '1.11.2' then 'f44da1025c355c51ae6e150fcc0672b87a738b9c8ad98e5fa6091502211da19a'
end
when 'Linux'
case version
when '1.11.0' then '87331b3b75d32d3de5d507db9a19a24dd30ff9b2eb6a5a9bdfaba954da15e16b'
when '1.11.1' then '893e3c6e89c0cd2c5f1e51ea41bc2dd97f5e791fcfa3cee28445df277836339d'
when '1.11.2' then '8c2e0c35e3cda11706f54b2d46c2521a6e9026a7b13c7d4b8ae1f3a706fc55e1'
end
end
end
def default_version
'1.11.2'
end
end
end
end

View File

@ -1,118 +0,0 @@
module DockerCookbook
module DockerHelpers
module Network
require 'ipaddr'
###################
# property coercion
###################
def coerce_auxiliary_addresses(v)
ray = []
Array(v).each do |e|
case e
when String, Array, nil
ray += Array(e)
when Hash
e.each { |key, val| ray << "#{key}=#{val}" }
end
end
ray.length == 1 ? ray[0] : ray
end
def coerce_gateway(v)
case v
when String
v.split('/')[0]
when Array
ray = Array(v).map { |a| a.split('/')[0] }
ray.length == 1 ? ray[0] : ray
end
end
def coerce_subnet(v)
Array(v).length == 1 ? Array(v)[0] : v
end
def coerce_ip_range(v)
Array(v).length == 1 ? Array(v)[0] : v
end
######
# IPAM
######
def consolidate_ipam(subnets, ranges, gateways, auxaddrs)
subnets = Array(subnets)
ranges = Array(ranges)
gateways = Array(gateways)
auxaddrs = Array(auxaddrs)
subnets = [] if subnets.empty?
ranges = [] if ranges.empty?
gateways = [] if gateways.empty?
auxaddrs = [] if auxaddrs.empty?
if subnets.size < ranges.size || subnets.size < gateways.size
raise 'every ip-range or gateway must have a corresponding subnet'
end
data = {}
# Check overlapping subnets
subnets.each do |s|
data.each do |k, _|
if subnet_matches(s, k) || subnet_matches(k, s)
raise 'multiple overlapping subnet configuration is not supported'
end
end
data[s] = { 'Subnet' => s, 'AuxiliaryAddresses' => {} }
end
ranges.each do |r|
match = false
subnets.each do |s|
ok = subnet_matches(s, r)
next unless ok
if data[s].fetch('IPRange', '') != ''
raise 'cannot configure multiple ranges on the same subnet'
end
data[s]['IPRange'] = r
match = true
end
raise "no matching subnet for range #{r}" unless match
end
gateways.each do |g|
match = false
subnets.each do |s|
ok = subnet_matches(s, g)
next unless ok
unless data[s].fetch('Gateway', '').empty?
raise "cannot configure multiple gateways (#{g}, #{data[s]['Gateway']}) for the same subnet (#{s})"
end
data[s]['Gateway'] = g
match = true
end
raise "no matching subnet for gateway #{s}" unless match
end
auxaddrs.each do |aa|
key, a = aa.split('=')
match = false
subnets.each do |s|
# require 'pry' ; binding.pry
ok = subnet_matches(s, a)
next unless ok
data[s]['AuxiliaryAddresses'][key] = a
match = true
end
raise "no matching subnet for aux-address #{a}" unless match
end
data.values
end
def subnet_matches(subnet, data)
IPAddr.new(subnet).include?(IPAddr.new(data))
end
end
end
end

View File

@ -16,13 +16,13 @@ fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|
([0-9a-fA-F]{1,4}:){1,4}:
((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}
(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])
)/
)/.freeze
IPV4_ADDR ||= /((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])/
IPV4_ADDR ||= /((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])/.freeze
IPV6_CIDR ||= /s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:)))(%.+)?s*/
IPV6_CIDR ||= /s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:)))(%.+)?s*/.freeze
IPV4_CIDR ||= %r{(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|[1-2][0-9]|3[0-2]))}
IPV4_CIDR ||= %r{(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|[1-2][0-9]|3[0-2]))}.freeze
module DockerCookbook
module DockerHelpers
@ -58,13 +58,13 @@ module DockerCookbook
c_a = 3 if a =~ %r{^tcp://127.0.0.1:}
c_a = 4 if a =~ %r{^tcp://(192\.168|10\.|172\.1[6789]\.|172\.2[0-9]\.|172\.3[01]\.).*:}
c_a = 5 if a =~ %r{^tcp://0.0.0.0:}
c_a = 6 unless c_a
c_a ||= 6
c_b = 1 if b =~ /^unix:/
c_b = 2 if b =~ /^fd:/
c_b = 3 if b =~ %r{^tcp://127.0.0.1:}
c_b = 4 if b =~ %r{^tcp://(192\.168|10\.|172\.1[6789]\.|172\.2[0-9]\.|172\.3[01]\.).*:}
c_b = 5 if b =~ %r{^tcp://0.0.0.0:}
c_b = 6 unless c_b
c_b ||= 6
c_a <=> c_b
end
if sorted.first =~ %r{^tcp://0.0.0.0:}
@ -76,16 +76,17 @@ module DockerCookbook
def connect_socket
return "/var/run/#{docker_name}.sock" unless host
return nil if host.grep(%r{unix://|fd://}).empty?
sorted = coerce_host(host).sort do |a, b|
c_a = 1 if a =~ /^unix:/
c_a = 2 if a =~ /^fd:/
c_a = 3 unless c_a
c_a ||= 3
c_b = 1 if b =~ /^unix:/
c_b = 2 if b =~ /^fd:/
c_b = 3 unless c_b
c_b ||= 3
c_a <=> c_b
end
sorted.first
sorted.first.sub(%r{unix://|fd://}, '')
end
def coerce_host(v)
@ -138,6 +139,14 @@ module DockerCookbook
end
end
def docker_raw_logs_arg
if Gem::Version.new(docker_major_version) < Gem::Version.new('1.11')
''
else
'--raw-logs'
end
end
def docker_daemon_cmd
[dockerd_bin, docker_daemon_arg, docker_daemon_opts].join(' ')
end
@ -165,6 +174,12 @@ module DockerCookbook
opts
end
def systemd_socket_args
opts = ''
systemd_socket_opts.each { |systemd_socket_opt| opts << "#{systemd_socket_opt}\n" } if systemd_socket_opts
opts
end
def docker_daemon_opts
opts = []
opts << "--api-cors-header=#{api_cors_header}" if api_cors_header
@ -182,8 +197,9 @@ module DockerCookbook
opts << "--fixed-cidr=#{fixed_cidr}" if fixed_cidr
opts << "--fixed-cidr-v6=#{fixed_cidr_v6}" if fixed_cidr_v6
opts << "--group=#{group}" if group
opts << "--graph=#{graph}" if graph
host.each { |h| opts << "-H #{h}" } if host
opts << "--data-root=#{data_root}" if data_root
opts << "--default-address-pool=#{default_ip_address_pool}" unless default_ip_address_pool.nil?
host.each { |h| opts << "--host #{h}" } if host
opts << "--icc=#{icc}" unless icc.nil?
insecure_registry.each { |i| opts << "--insecure-registry=#{i}" } if insecure_registry
opts << "--ip=#{ip}" if ip
@ -194,7 +210,7 @@ module DockerCookbook
opts << "--log-level=#{log_level}" if log_level
labels.each { |l| opts << "--label=#{l}" } if labels
opts << "--log-driver=#{log_driver}" if log_driver
log_opts.each { |log_opt| opts << "--log-opt #{log_opt}" } if log_opts
log_opts.each { |log_opt| opts << "--log-opt '#{log_opt}'" } if log_opts
opts << "--mtu=#{mtu}" if mtu
opts << "--pidfile=#{pidfile}" if pidfile
opts << "--registry-mirror=#{registry_mirror}" if registry_mirror
@ -218,6 +234,6 @@ module DockerCookbook
return true if o.stdout =~ /CONTAINER/
false
end
end
end unless defined?(DockerCookbook::DockerHelpers::Service)
end
end
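As a rough illustration of how these helpers compose: docker_daemon_opts accumulates flag strings in an array, and docker_daemon_cmd joins them behind the dockerd binary. A simplified sketch with hypothetical paths and flags (not the resource defaults):
# Simplified version of the flag-assembly pattern used above.
opts = []
opts << '--data-root=/var/lib/docker'
opts << '--host unix:///var/run/docker.sock'
opts << "--log-opt 'max-size=10m'"

cmd = ['/usr/bin/dockerd', opts].join(' ')
# => "/usr/bin/dockerd --data-root=/var/lib/docker --host unix:///var/run/docker.sock --log-opt 'max-size=10m'"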

View File

@ -1,339 +0,0 @@
if defined?(ChefSpec)
#####################
# docker_installation
#####################
ChefSpec.define_matcher :docker_installation
def create_docker_installation(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation, :create, resource_name)
end
def delete_docker_installation(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation, :delete, resource_name)
end
def create_docker_installation_binary(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_binary, :create, resource_name)
end
def delete_docker_installation_binary(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_binary, :delete, resource_name)
end
def create_docker_installation_script(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_script, :create, resource_name)
end
def delete_docker_installation_script(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_script, :delete, resource_name)
end
def create_docker_installation_package(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_package, :create, resource_name)
end
def delete_docker_installation_package(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_package, :delete, resource_name)
end
def create_docker_installation_tarball(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_tarball, :create, resource_name)
end
def delete_docker_installation_tarball(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_installation_tarball, :delete, resource_name)
end
################
# docker_service
################
ChefSpec.define_matcher :docker_service
def create_docker_service(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service, :create, resource_name)
end
def delete_docker_service(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service, :delete, resource_name)
end
def start_docker_service(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service, :start, resource_name)
end
def stop_docker_service(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service, :stop, resource_name)
end
def restart_docker_service(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service, :restart, resource_name)
end
########################
# docker_service_manager
########################
ChefSpec.define_matcher :docker_service_manager
def create_docker_service_manager(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager, :create, resource_name)
end
def delete_docker_service_manager(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager, :delete, resource_name)
end
def start_docker_service_manager(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager, :start, resource_name)
end
def stop_docker_service_manager(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager, :stop, resource_name)
end
def restart_docker_service_manager(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager, :restart, resource_name)
end
def create_docker_service_manager_execute(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_execute, :create, resource_name)
end
def delete_docker_service_manager_execute(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_execute, :delete, resource_name)
end
def start_docker_service_manager_execute(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_execute, :start, resource_name)
end
def stop_docker_service_manager_execute(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_execute, :stop, resource_name)
end
def restart_docker_service_manager_execute(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_execute, :restart, resource_name)
end
def create_docker_service_manager_sysvinit(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_sysvinit, :create, resource_name)
end
def delete_docker_service_manager_sysvinit(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_sysvinit, :delete, resource_name)
end
def start_docker_service_manager_sysvinit(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_sysvinit, :start, resource_name)
end
def stop_docker_service_manager_sysvinit(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_sysvinit, :stop, resource_name)
end
def restart_docker_service_manager_sysvinit(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_sysvinit, :restart, resource_name)
end
def create_docker_service_manager_upstart(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_upstart, :create, resource_name)
end
def delete_docker_service_manager_upstart(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_upstart, :delete, resource_name)
end
def start_docker_service_manager_upstart(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_upstart, :start, resource_name)
end
def stop_docker_service_manager_upstart(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_upstart, :stop, resource_name)
end
def restart_docker_service_manager_upstart(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_upstart, :restart, resource_name)
end
def create_docker_service_manager_systemd(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_systemd, :create, resource_name)
end
def delete_docker_service_manager_systemd(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_systemd, :delete, resource_name)
end
def start_docker_service_manager_systemd(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_systemd, :start, resource_name)
end
def stop_docker_service_manager_systemd(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_systemd, :stop, resource_name)
end
def restart_docker_service_manager_systemd(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_service_manager_systemd, :restart, resource_name)
end
##############
# docker_image
##############
ChefSpec.define_matcher :docker_image
def build_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :build, resource_name)
end
def build_if_missing_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :build_if_missing, resource_name)
end
def pull_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :pull, resource_name)
end
def pull_if_missing_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :pull_if_missing, resource_name)
end
def import_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :import, resource_name)
end
def push_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :push, resource_name)
end
def remove_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :remove, resource_name)
end
def save_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :save, resource_name)
end
def load_docker_image(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_image, :load, resource_name)
end
##################
# docker_container
##################
ChefSpec.define_matcher :docker_container
def create_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :create, resource_name)
end
def start_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :start, resource_name)
end
def stop_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :stop, resource_name)
end
def kill_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :kill, resource_name)
end
def run_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :run, resource_name)
end
def run_if_missing_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :run_if_missing, resource_name)
end
def pause_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :pause, resource_name)
end
def unpause_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :unpause, resource_name)
end
def restart_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :restart, resource_name)
end
def redeploy_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :redeploy, resource_name)
end
def delete_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :delete, resource_name)
end
def remove_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :remove, resource_name)
end
def commit_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :commit, resource_name)
end
def export_docker_container(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_container, :export, resource_name)
end
##############
# docker_tag
##############
ChefSpec.define_matcher :docker_tag
def tag_docker_tag(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_tag, :tag, resource_name)
end
#################
# docker_registry
#################
ChefSpec.define_matcher :docker_registry
def login_docker_registry(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_registry, :login, resource_name)
end
################
# docker_network
################
ChefSpec.define_matcher :docker_network
def create_docker_network(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_network, :create, resource_name)
end
def delete_docker_network(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_network, :delete, resource_name)
end
def connect_docker_network(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_network, :connect, resource_name)
end
def disconnect_docker_network(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_network, :disconnect, resource_name)
end
###############
# docker_volume
###############
ChefSpec.define_matcher :docker_volume
def create_docker_volume(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_volume, :create, resource_name)
end
def remove_docker_volume(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_volume, :remove, resource_name)
end
###############
# docker_exec
###############
ChefSpec.define_matcher :docker_exec
def run_docker_exec(resource_name)
ChefSpec::Matchers::ResourceMatcher.new(:docker_exec, :run, resource_name)
end
end

View File

@ -1 +0,0 @@
{"name":"docker","version":"2.15.2","description":"Provides docker_service, docker_image, and docker_container resources","long_description":"","maintainer":"Cookbook Engineering Team","maintainer_email":"cookbooks@chef.io","license":"Apache 2.0","platforms":{"amazon":">= 0.0.0","centos":">= 0.0.0","scientific":">= 0.0.0","oracle":">= 0.0.0","debian":">= 0.0.0","fedora":">= 0.0.0","redhat":">= 0.0.0","ubuntu":">= 0.0.0"},"dependencies":{},"recommendations":{},"suggestions":{},"conflicting":{},"providing":{},"replacing":{},"attributes":{},"groupings":{},"recipes":{}}

View File

@ -0,0 +1,21 @@
name 'docker'
maintainer 'Chef Software, Inc.'
maintainer_email 'cookbooks@chef.io'
license 'Apache-2.0'
description 'Provides docker_service, docker_image, and docker_container resources'
version '4.9.2'
source_url 'https://github.com/chef-cookbooks/docker'
issues_url 'https://github.com/chef-cookbooks/docker/issues'
supports 'amazon'
supports 'centos'
supports 'scientific'
supports 'oracle'
supports 'debian'
supports 'fedora'
supports 'redhat'
supports 'ubuntu'
chef_version '>= 12.15'
gem 'docker-api', '~> 1.34.0'

View File

@ -0,0 +1,925 @@
require 'spec_helper'
describe 'docker_test::container' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
before do
stub_command("[ ! -z `docker ps -qaf 'name=busybox_ls$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=bill$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=hammer_time$'` ]").and_return(false)
stub_command('docker ps -a | grep red_light | grep Exited').and_return(true)
stub_command("[ ! -z `docker ps -qaf 'name=red_light$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=green_light$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=quitter$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=restarter$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=uber_options$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=kill_after$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=change_network_mode$'` ]").and_return(false)
stub_command('docker images | grep cmd_change').and_return(false)
stub_command('docker ps -a | grep cmd_change$').and_return(false)
end
context 'testing create action' do
it 'create docker_container[hello-world]' do
expect(chef_run).to create_docker_container('hello-world').with(
api_retries: 3,
read_timeout: 60,
container_name: 'hello-world',
repo: 'hello-world',
tag: 'latest',
command: ['/hello'],
cgroup_parent: '',
cpu_shares: 0,
cpuset_cpus: '',
detach: true,
domain_name: '',
log_driver: 'json-file',
memory: 0,
memory_swap: 0,
network_disabled: false,
outfile: nil,
restart_maximum_retry_count: 0,
restart_policy: nil,
security_opt: nil,
signal: 'SIGTERM',
user: ''
)
end
end
context 'testing run action' do
it 'run docker_container[busybox_ls]' do
expect(chef_run).to run_docker_container('busybox_ls').with(
repo: 'busybox',
command: ['ls', '-la', '/']
)
end
it 'run_if_missing docker_container[alpine_ls]' do
expect(chef_run).to run_if_missing_docker_container('alpine_ls').with(
repo: 'alpine',
tag: '3.1',
command: ['ls', '-la', '/']
)
end
end
context 'testing ports property' do
it 'run docker_container[an_echo_server]' do
expect(chef_run).to run_docker_container('an_echo_server').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '7', '-e', '/bin/cat'],
port: '7:7'
)
end
it 'run docker_container[another_echo_server]' do
expect(chef_run).to run_docker_container('another_echo_server').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '7', '-e', '/bin/cat'],
port: '7'
)
end
it 'run docker_container[an_udp_echo_server]' do
expect(chef_run).to run_docker_container('an_udp_echo_server').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ul', '-p', '7', '-e', '/bin/cat'],
port: '5007:7/udp'
)
end
it 'run docker_container[multi_ip_port]' do
expect(chef_run).to run_docker_container('multi_ip_port').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ul', '-p', '7', '-e', '/bin/cat'],
port: ['8301', '8301:8301/udp', '127.0.0.1:8500:8500', '127.0.1.1:8500:8500']
)
end
it 'run docker_container[port_range]' do
expect(chef_run).to run_docker_container('port_range').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: ['2000-2001', '2000-2001/udp', '3000-3001/tcp', '7000-7002:8000-8002']
)
end
end
context 'testing action :kill' do
it 'run execute[bill]' do
expect(chef_run).to run_execute('bill').with(
command: 'docker run --name bill -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'kill docker_container[bill]' do
expect(chef_run).to kill_docker_container('bill')
end
end
context 'testing action :stop' do
it 'run execute[hammer_time]' do
expect(chef_run).to run_execute('hammer_time').with(
command: 'docker run --name hammer_time -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'stop docker_container[hammer_time]' do
expect(chef_run).to stop_docker_container('hammer_time')
end
end
context 'testing action :pause' do
it 'run execute[rm stale red_light]' do
expect(chef_run).to run_execute('rm stale red_light').with(
command: 'docker rm -f red_light'
)
end
it 'run execute[red_light]' do
expect(chef_run).to run_execute('red_light').with(
command: 'docker run --name red_light -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'pause docker_container[red_light]' do
expect(chef_run).to pause_docker_container('red_light')
end
end
context 'testing action :unpause' do
it 'run bash[green_light]' do
expect(chef_run).to run_bash('green_light')
end
it 'unpause docker_container[green_light]' do
expect(chef_run).to unpause_docker_container('green_light')
end
end
context 'testing action :restart' do
it 'run bash[quitter]' do
expect(chef_run).to run_bash('quitter')
end
it 'restart docker_container[quitter]' do
expect(chef_run).to restart_docker_container('quitter')
end
it 'create file[/marker_container_quitter_restarter]' do
expect(chef_run).to create_file('/marker_container_quitter_restarter')
end
it 'run execute[restarter]' do
expect(chef_run).to run_execute('restarter').with(
command: 'docker run --name restarter -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'restart docker_container[restarter]' do
expect(chef_run).to restart_docker_container('restarter')
end
it 'create file[/marker_container_restarter]' do
expect(chef_run).to create_file('/marker_container_restarter')
end
end
context 'testing action :delete' do
it 'run execute[deleteme]' do
expect(chef_run).to run_execute('deleteme').with(
command: 'docker run --name deleteme -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'create file[/marker_container_deleteme]' do
expect(chef_run).to create_file('/marker_container_deleteme')
end
it 'delete docker_container[deleteme]' do
expect(chef_run).to delete_docker_container('deleteme')
end
end
context 'testing action :redeploy' do
it 'runs docker_container[redeployer]' do
expect(chef_run).to run_docker_container('redeployer').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '7'
)
end
it 'creates docker_container[unstarted_redeployer]' do
expect(chef_run).to create_docker_container('unstarted_redeployer').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '7'
)
end
it 'runs execute[redeploy redeployers]' do
expect(chef_run).to run_execute('redeploy redeployers')
end
end
context 'testing bind_mounter' do
it 'creates directory[/hostbits]' do
expect(chef_run).to create_directory('/hostbits').with(
owner: 'root',
group: 'root',
mode: '0755'
)
end
it 'creates file[/hostbits/hello.txt]' do
expect(chef_run).to create_file('/hostbits/hello.txt').with(
content: 'hello there\n',
owner: 'root',
group: 'root',
mode: '0644'
)
end
it 'creates directory[/more-hostbits]' do
expect(chef_run).to create_directory('/more-hostbits').with(
owner: 'root',
group: 'root',
mode: '0755'
)
end
it 'creates file[/more-hostbits/hello.txt]' do
expect(chef_run).to create_file('/more-hostbits/hello.txt').with(
content: 'hello there\n',
owner: 'root',
group: 'root',
mode: '0644'
)
end
it 'run_if_missing docker_container[bind_mounter]' do
expect(chef_run).to run_if_missing_docker_container('bind_mounter').with(
repo: 'busybox',
command: ['ls', '-la', '/bits', '/more-bits'],
volumes_binds: ['/hostbits:/bits', '/more-hostbits:/more-bits', '/winter:/spring:ro'],
volumes: { '/snow' => {}, '/summer' => {} }
)
end
end
context 'testing binds_alias' do
it 'run_if_missing docker_container[binds_alias]' do
expect(chef_run).to run_if_missing_docker_container('binds_alias').with(
repo: 'busybox',
command: ['ls', '-la', '/bits', '/more-bits'],
volumes_binds: ['/fall:/sun', '/winter:/spring:ro'],
volumes: { '/snow' => {}, '/summer' => {} }
)
end
end
context 'testing volumes_from' do
it 'creates directory[/chefbuilder]' do
expect(chef_run).to create_directory('/chefbuilder').with(
owner: 'root',
group: 'root'
)
end
it 'runs execute[copy chef to chefbuilder]' do
expect(chef_run).to run_execute('copy chef to chefbuilder').with(
command: 'tar cf - /opt/chef | tar xf - -C /chefbuilder',
creates: '/chefbuilder/opt'
)
end
it 'creates file[/chefbuilder/Dockerfile]' do
expect(chef_run).to create_file('/chefbuilder/Dockerfile')
end
it 'build_if_missing docker_image[chef_container]' do
expect(chef_run).to build_if_missing_docker_image('chef_container').with(
tag: 'latest',
source: '/chefbuilder'
)
end
it 'create docker_container[chef_container]' do
expect(chef_run).to create_docker_container('chef_container').with(
command: ['true'],
volumes: { '/opt/chef' => {} }
)
end
it 'run_if_missing docker_container[ohai_debian]' do
expect(chef_run).to run_if_missing_docker_container('ohai_debian').with(
command: ['/opt/chef/embedded/bin/ohai', 'platform'],
repo: 'debian',
volumes_from: ['chef_container']
)
end
end
context 'testing env' do
it 'run_if_missing docker_container[env]' do
expect(chef_run).to run_if_missing_docker_container('env').with(
repo: 'debian',
env: ['PATH=/usr/bin', 'FOO=bar'],
command: ['env']
)
end
end
context 'testing entrypoint' do
it 'run_if_missing docker_container[ohai_again]' do
expect(chef_run).to run_if_missing_docker_container('ohai_again').with(
repo: 'debian',
volumes_from: ['chef_container'],
entrypoint: ['/opt/chef/embedded/bin/ohai']
)
end
it 'run_if_missing docker_container[ohai_again_debian]' do
expect(chef_run).to run_if_missing_docker_container('ohai_again_debian').with(
repo: 'debian',
volumes_from: ['chef_container'],
entrypoint: ['/opt/chef/embedded/bin/ohai'],
command: ['platform']
)
end
end
context 'testing Dockerfile CMD directive' do
it 'creates directory[/cmd_test]' do
expect(chef_run).to create_directory('/cmd_test')
end
it 'creates file[/cmd_test/Dockerfile]' do
expect(chef_run).to create_file('/cmd_test/Dockerfile')
end
it 'build_if_missing docker_image[cmd_test]' do
expect(chef_run).to build_if_missing_docker_image('cmd_test').with(
tag: 'latest',
source: '/cmd_test'
)
end
it 'run_if_missing docker_container[cmd_test]' do
expect(chef_run).to run_if_missing_docker_container('cmd_test')
end
end
context 'testing autoremove' do
it 'runs docker_container[sean_was_here]' do
expect(chef_run).to run_docker_container('sean_was_here').with(
repo: 'debian',
volumes_from: ['chef_container'],
autoremove: true
)
end
it 'creates file[/marker_container_sean_was_here]' do
expect(chef_run).to create_file('/marker_container_sean_was_here')
end
end
context 'testing detach' do
it 'runs docker_container[attached]' do
expect(chef_run).to run_docker_container('attached').with(
repo: 'debian',
volumes_from: ['chef_container'],
detach: false
)
end
it 'creates file[/marker_container_attached]' do
expect(chef_run).to create_file('/marker_container_attached')
end
context 'with timeout' do
it 'runs docker_container[attached_with_timeout]' do
expect(chef_run).to run_docker_container('attached_with_timeout').with(
repo: 'debian',
volumes_from: ['chef_container'],
detach: false,
timeout: 10
)
end
it 'creates file[/marker_container_attached_with_timeout]' do
expect(chef_run).to create_file('/marker_container_attached_with_timeout')
end
end
end
context 'testing cap_add' do
it 'run_if_missing docker_container[cap_add_net_admin]' do
expect(chef_run).to run_if_missing_docker_container('cap_add_net_admin').with(
repo: 'debian',
command: ['bash', '-c', 'ip addr add 10.9.8.7/24 brd + dev eth0 label eth0:0 ; ip addr list'],
cap_add: ['NET_ADMIN']
)
end
it 'run_if_missing docker_container[cap_add_net_admin_error]' do
expect(chef_run).to run_if_missing_docker_container('cap_add_net_admin_error').with(
repo: 'debian',
command: ['bash', '-c', 'ip addr add 10.9.8.7/24 brd + dev eth0 label eth0:0 ; ip addr list']
)
end
end
context 'testing cap_drop' do
it 'run_if_missing docker_container[cap_drop_mknod]' do
expect(chef_run).to run_if_missing_docker_container('cap_drop_mknod').with(
repo: 'debian',
command: ['bash', '-c', 'mknod -m 444 /dev/urandom2 c 1 9 ; ls -la /dev/urandom2'],
cap_drop: ['MKNOD']
)
end
it 'run_if_missing docker_container[cap_drop_mknod_error]' do
expect(chef_run).to run_if_missing_docker_container('cap_drop_mknod_error').with(
repo: 'debian',
command: ['bash', '-c', 'mknod -m 444 /dev/urandom2 c 1 9 ; ls -la /dev/urandom2']
)
end
end
context 'testing hostname and domain_name' do
it 'run_if_missing docker_container[fqdn]' do
expect(chef_run).to run_if_missing_docker_container('fqdn').with(
repo: 'debian',
command: ['hostname', '-f'],
hostname: 'computers',
domain_name: 'biz'
)
end
end
context 'testing dns' do
it 'run_if_missing docker_container[dns]' do
expect(chef_run).to run_if_missing_docker_container('dns').with(
repo: 'debian',
command: ['cat', '/etc/resolv.conf'],
hostname: 'computers',
dns: ['4.3.2.1', '1.2.3.4'],
dns_search: ['computers.biz', 'chef.io']
)
end
end
context 'testing extra_hosts' do
it 'run_if_missing docker_container[extra_hosts]' do
expect(chef_run).to run_if_missing_docker_container('extra_hosts').with(
repo: 'debian',
command: ['cat', '/etc/hosts'],
extra_hosts: ['east:4.3.2.1', 'west:1.2.3.4']
)
end
end
context 'testing cpu_shares' do
it 'run_if_missing docker_container[cpu_shares]' do
expect(chef_run).to run_if_missing_docker_container('cpu_shares').with(
repo: 'alpine',
tag: '3.1',
command: ['ls', '-la'],
cpu_shares: 512
)
end
end
context 'testing cpuset_cpus' do
it 'run_if_missing docker_container[cpuset_cpus]' do
expect(chef_run).to run_if_missing_docker_container('cpuset_cpus').with(
repo: 'alpine',
tag: '3.1',
command: ['ls', '-la'],
cpuset_cpus: '0,1'
)
end
end
context 'testing restart_policy' do
it 'run_if_missing docker_container[try_try_again]' do
expect(chef_run).to run_if_missing_docker_container('try_try_again').with(
repo: 'alpine',
tag: '3.1',
command: ['grep', 'asdasdasd', '/etc/passwd'],
restart_policy: 'on-failure',
restart_maximum_retry_count: 2
)
end
it 'run_if_missing docker_container[reboot_survivor]' do
expect(chef_run).to run_if_missing_docker_container('reboot_survivor').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '123', '-e', '/bin/cat'],
port: '123',
restart_policy: 'always'
)
end
it 'run_if_missing docker_container[reboot_survivor_retry]' do
expect(chef_run).to run_if_missing_docker_container('reboot_survivor_retry').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '123', '-e', '/bin/cat'],
port: '123',
restart_policy: nil,
restart_maximum_retry_count: 2
)
end
end
context 'testing links' do
it 'runs docker_container[link_source]' do
expect(chef_run).to run_docker_container('link_source').with(
repo: 'alpine',
tag: '3.1',
env: ['FOO=bar', 'BIZ=baz'],
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '321'
)
end
it 'runs docker_container[link_source_2]' do
expect(chef_run).to run_docker_container('link_source_2').with(
repo: 'alpine',
tag: '3.1',
env: ['FOO=few', 'BIZ=buzz'],
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '322'
)
end
it 'run_if_missing docker_container[link_target_1]' do
expect(chef_run).to run_if_missing_docker_container('link_target_1').with(
repo: 'alpine',
tag: '3.1',
env: ['ASD=asd'],
command: ['ping', '-c', '1', 'hello'],
links: ['link_source:hello']
)
end
it 'run_if_missing docker_container[link_target_2]' do
expect(chef_run).to run_if_missing_docker_container('link_target_2').with(
repo: 'alpine',
tag: '3.1',
command: ['env'],
links: ['link_source:hello']
)
end
it 'run_if_missing docker_container[link_target_3]' do
expect(chef_run).to run_if_missing_docker_container('link_target_3').with(
repo: 'alpine',
tag: '3.1',
env: ['ASD=asd'],
command: ['ping', '-c', '1', 'hello_again'],
links: ['link_source:hello', 'link_source_2:hello_again']
)
end
it 'run_if_missing docker_container[link_target_4]' do
expect(chef_run).to run_if_missing_docker_container('link_target_4').with(
repo: 'alpine',
tag: '3.1',
command: ['env'],
links: ['link_source:hello', 'link_source_2:hello_again']
)
end
it 'runs execute[redeploy_link_source]' do
expect(chef_run).to run_execute('redeploy_link_source')
end
end
context 'testing link removal' do
it 'run_if_missing docker_container[another_link_source]' do
expect(chef_run).to run_if_missing_docker_container('another_link_source').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '456', '-e', '/bin/cat'],
port: '456'
)
end
it 'run_if_missing docker_container[another_link_target]' do
expect(chef_run).to run_if_missing_docker_container('another_link_target').with(
repo: 'alpine',
tag: '3.1',
command: ['ping', '-c', '1', 'hello'],
links: ['another_link_source:derp']
)
end
end
context 'testing volume removal' do
it 'creates directory[/dangler]' do
expect(chef_run).to create_directory('/dangler').with(
owner: 'root',
group: 'root',
mode: '0755'
)
end
it 'creates file[/dangler/Dockerfile]' do
expect(chef_run).to create_file('/dangler/Dockerfile')
end
it 'build_if_missing docker_image[dangler]' do
expect(chef_run).to build_if_missing_docker_image('dangler').with(
tag: 'latest',
source: '/dangler'
)
end
it 'creates docker_container[dangler]' do
expect(chef_run).to create_docker_container('dangler').with(
command: ['true']
)
end
it 'creates file[/marker_container_dangler]' do
expect(chef_run).to create_file('/marker_container_dangler')
end
it 'deletes docker_container[dangler_volume_remover]' do
expect(chef_run).to delete_docker_container('dangler_volume_remover').with(
container_name: 'dangler',
remove_volumes: true
)
end
end
context 'testing mutator' do
it 'tags docker_tag[mutator_from_busybox]' do
expect(chef_run).to tag_docker_tag('mutator_from_busybox').with(
target_repo: 'busybox',
target_tag: 'latest',
to_repo: 'someara/mutator',
to_tag: 'latest'
)
end
it 'run_if_missing docker_container[mutator]' do
expect(chef_run).to run_if_missing_docker_container('mutator').with(
repo: 'someara/mutator',
tag: 'latest',
command: ['sh', '-c', 'touch /mutator-`date +"%Y-%m-%d_%H-%M-%S"`'],
outfile: '/mutator.tar',
force: true
)
end
it 'runs execute[commit mutator]' do
expect(chef_run).to run_execute('commit mutator')
end
end
context 'testing network_mode' do
it 'runs docker_container[network_mode]' do
expect(chef_run).to run_docker_container('network_mode').with(
repo: 'alpine',
tag: '3.1',
command: ['nc', '-ll', '-p', '776', '-e', '/bin/cat'],
port: '776:776',
network_mode: 'host'
)
end
end
it 'runs execute[change_network_mode]' do
expect(chef_run).to run_execute('change_network_mode')
end
it 'runs docker_container[change_network_mode]' do
expect(chef_run).to run_docker_container('change_network_mode')
end
context 'testing ulimits' do
it 'runs docker_container[ulimits]' do
expect(chef_run).to run_docker_container('ulimits').with(
repo: 'alpine',
tag: '3.1',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
port: '778:778',
cap_add: ['SYS_RESOURCE'],
ulimits: [
'nofile=40960:40960',
'core=100000000:100000000',
'memlock=100000000:100000000',
]
)
end
end
context 'testing api_timeouts' do
it 'run_if_missing docker_container[api_timeouts]' do
expect(chef_run).to run_if_missing_docker_container('api_timeouts').with(
command: ['nc', '-ll', '-p', '779', '-e', '/bin/cat'],
repo: 'alpine',
tag: '3.1',
read_timeout: 60,
write_timeout: 60
)
end
end
context 'testing uber_options' do
it 'runs execute[uber_options]' do
expect(chef_run).to run_execute('uber_options').with(
command: 'docker run --name uber_options -d busybox sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
)
end
it 'runs docker_container[uber_options]' do
expect(chef_run).to run_docker_container('uber_options').with(
repo: 'alpine',
tag: '3.1',
hostname: 'www',
domainname: 'computers.biz',
env: ['FOO=foo', 'BAR=bar'],
mac_address: '00:00:DE:AD:BE:EF',
network_disabled: false,
tty: true,
volumes_binds: ['/hostbits:/bits', '/more-hostbits:/more-bits'],
volumes: { '/root' => {} },
working_dir: '/',
cap_add: %w(NET_ADMIN SYS_RESOURCE),
cap_drop: ['MKNOD'],
cpu_shares: 512,
cpuset_cpus: '0,1',
dns: ['8.8.8.8', '8.8.4.4'],
dns_search: ['computers.biz'],
extra_hosts: ['east:4.3.2.1', 'west:1.2.3.4'],
links: ['link_source:hello'],
port: '1234:1234',
volumes_from: ['chef_container'],
user: 'operator',
entrypoint: ['/bin/sh', '-c'],
command: ['trap exit 0 SIGTERM; while :; do sleep 5; done'],
ulimits: [
'nofile=40960:40960',
'core=100000000:100000000',
'memlock=100000000:100000000',
],
labels: { 'foo' => 'bar', 'hello' => 'world' }
)
end
end
context 'testing overrides' do
it 'creates directory[/overrides]' do
expect(chef_run).to create_directory('/overrides').with(
owner: 'root',
group: 'root'
)
end
it 'creates file[/overrides/Dockerfile]' do
expect(chef_run).to create_file('/overrides/Dockerfile')
end
it 'build_if_missing docker_image[overrides]' do
expect(chef_run).to build_if_missing_docker_image('overrides').with(
tag: 'latest',
source: '/overrides',
force: true
)
end
it 'run docker_container[overrides-1]' do
expect(chef_run).to run_docker_container('overrides-1').with(
repo: 'overrides'
)
end
it 'run docker_container[overrides-2]' do
expect(chef_run).to run_docker_container('overrides-2').with(
repo: 'overrides',
user: 'operator',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done'],
env: ['FOO=biz'],
volume: { '/var/log' => {} },
workdir: '/tmp'
)
end
end
context 'testing host overrides' do
it 'creates docker_container[host_override]' do
expect(chef_run).to create_docker_container('host_override').with(
repo: 'alpine',
host: 'tcp://127.0.0.1:2376',
command: ['ls', '-la', '/']
)
end
end
context 'testing logging drivers' do
it 'run_if_missing docker_container[syslogger]' do
expect(chef_run).to run_if_missing_docker_container('syslogger').with(
command: ['nc', '-ll', '-p', '780', '-e', '/bin/cat'],
repo: 'alpine',
tag: '3.1',
log_driver: 'syslog',
log_opts: { 'tag' => 'container-syslogger' }
)
end
end
context 'testing kill_after' do
it 'creates directory[/kill_after]' do
expect(chef_run).to create_directory('/kill_after').with(
owner: 'root',
group: 'root'
)
end
it 'creates file[/kill_after/loop.sh]' do
expect(chef_run).to create_file('/kill_after/loop.sh')
end
it 'creates file[/kill_after/Dockerfile]' do
expect(chef_run).to create_file('/kill_after/Dockerfile')
end
it 'build_if_missing docker_image[kill_after]' do
expect(chef_run).to build_if_missing_docker_image('kill_after').with(
tag: 'latest',
source: '/kill_after',
force: true
)
end
it 'run execute[kill_after]' do
expect(chef_run).to run_execute('kill_after').with(
command: 'docker run --name kill_after -d kill_after'
)
end
it 'stop docker_container[kill_after]' do
expect(chef_run).to stop_docker_container('kill_after')
end
it 'run_if_missing docker_container[pid_mode]' do
expect(chef_run).to run_if_missing_docker_container('pid_mode').with(
pid_mode: 'host'
)
end
it 'run_if_missing docker_container[ipc_mode]' do
expect(chef_run).to run_if_missing_docker_container('ipc_mode').with(
ipc_mode: 'host'
)
end
it 'run_if_missing docker_container[uts_mode]' do
expect(chef_run).to run_if_missing_docker_container('uts_mode').with(
uts_mode: 'host'
)
end
end
context 'testing ro_rootfs' do
it 'creates read-only rootfs' do
expect(chef_run).to run_if_missing_docker_container('ro_rootfs').with(
ro_rootfs: true
)
end
end
context 'testing health_check options' do
it 'sets health_check options' do
expect(chef_run).to run_docker_container('health_check').with(
repo: 'alpine',
tag: '3.1',
health_check: {
'Test' =>
[
'string',
],
'Interval' => 0,
'Timeout' => 0,
'Retries' => 0,
'StartPeriod' => 0,
}
)
end
end
end
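For orientation, the health_check expectation at the end of this spec would match a recipe declaration roughly like the sketch below (property values copied from the assertion; the surrounding test recipe is not shown here):
docker_container 'health_check' do
  repo 'alpine'
  tag '3.1'
  health_check(
    'Test' => ['string'],
    'Interval' => 0,
    'Timeout' => 0,
    'Retries' => 0,
    'StartPeriod' => 0
  )
  action :run
end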

View File

@ -0,0 +1,41 @@
require 'spec_helper'
describe 'docker_test::exec' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
it 'pull_if_missing docker_image[busybox]' do
expect(chef_run).to pull_if_missing_docker_image('busybox')
end
it 'run docker_container[busybox_exec]' do
expect(chef_run).to run_docker_container('busybox_exec').with(
repo: 'busybox',
command: ['sh', '-c', 'trap exit 0 SIGTERM; while :; do sleep 1; done']
)
end
context 'testing run action' do
it 'run docker_exec[touch_it]' do
expect(chef_run).to run_docker_exec('touch_it').with(
container: 'busybox_exec',
command: ['touch', '/tmp/onefile'],
timeout: 120
)
end
it 'creates file[/marker_busybox_exec_onefile]' do
expect(chef_run).to create_file('/marker_busybox_exec_onefile')
end
it 'run docker_exec[poke_it]' do
expect(chef_run).to run_docker_exec('poke_it').with(
container: 'busybox_exec',
command: ['touch', '/tmp/twofile']
)
end
it 'creates file[/marker_busybox_exec_twofile]' do
expect(chef_run).to create_file('/marker_busybox_exec_twofile')
end
end
end
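The docker_exec expectations above correspond to a recipe declaration along these lines (a sketch inferred from the asserted properties):
docker_exec 'touch_it' do
  container 'busybox_exec'
  command ['touch', '/tmp/onefile']
  timeout 120
  action :run
end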

View File

@ -0,0 +1,24 @@
require 'spec_helper'
describe 'docker_test::image_prune' do
context 'it steps over the provider' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '18.04').converge(described_recipe) }
context 'testing default action, default properties' do
it 'prunes docker_image[hello-world]' do
expect(chef_run).to prune_docker_image_prune('hello-world').with(
dangling: true
)
end
it 'prunes docker_image[prune-old-images]' do
expect(chef_run).to prune_docker_image_prune('prune-old-images').with(
dangling: true,
prune_until: '1h30m',
with_label: 'com.example.vendor=ACME',
without_label: 'no_prune'
)
end
end
end
end
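A recipe declaration consistent with the prune-old-images expectation above would look roughly like this sketch (properties copied from the assertion; the :prune action matches the prune_docker_image_prune matcher):
docker_image_prune 'prune-old-images' do
  dangling true
  prune_until '1h30m'
  with_label 'com.example.vendor=ACME'
  without_label 'no_prune'
  action :prune
end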

View File

@ -0,0 +1,271 @@
require 'spec_helper'
describe 'docker_test::image' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
before do
stub_command('/usr/bin/test -f /tmp/registry/tls/ca-key.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/server-key.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.csr').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/key.pem').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/client.csr').and_return(true)
stub_command('/usr/bin/test -f /tmp/registry/tls/cert.pem').and_return(true)
stub_command("[ ! -z `docker ps -qaf 'name=registry_service$'` ]").and_return(true)
stub_command("[ ! -z `docker ps -qaf 'name=registry_proxy$'` ]").and_return(true)
stub_command('netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"').and_return(false)
end
context 'testing default action, default properties' do
it 'pulls docker_image[hello-world]' do
expect(chef_run).to pull_docker_image('hello-world').with(
api_retries: 3,
destination: nil,
force: false,
nocache: false,
noprune: false,
read_timeout: 120,
repo: 'hello-world',
rm: true,
source: nil,
tag: 'latest',
write_timeout: nil
)
end
end
context 'testing non-default name attribute containing a single quote' do
it "pulls docker_image[Tom's container]" do
expect(chef_run).to pull_docker_image("Tom's container").with(
repo: 'tduffield/testcontainerd'
)
end
end
context 'testing the :pull action' do
it 'pulls docker_image[busybox]' do
expect(chef_run).to pull_docker_image('busybox')
end
end
context 'testing using pull_if_missing' do
it 'pull_if_missing docker_image[debian]' do
expect(chef_run).to pull_if_missing_docker_image('debian')
end
end
context 'testing specifying a tag and read/write timeouts' do
it 'pulls docker_image[alpine]' do
expect(chef_run).to pull_docker_image('alpine').with(
tag: '3.1',
read_timeout: 60,
write_timeout: 60
)
end
end
context 'testing the host property' do
it 'pulls docker_image[alpine-localhost]' do
expect(chef_run).to pull_docker_image('alpine-localhost').with(
repo: 'alpine',
tag: '2.7',
host: 'tcp://127.0.0.1:2376'
)
end
end
context 'testing :remove action' do
it 'runs execute[pull vbatts/slackware]' do
expect(chef_run).to run_execute('pull vbatts/slackware').with(
command: 'docker pull vbatts/slackware ; touch /marker_image_slackware',
creates: '/marker_image_slackware'
)
end
it 'removes docker_image[vbatts/slackware]' do
expect(chef_run).to remove_docker_image('vbatts/slackware')
end
end
context 'testing :save action' do
it 'saves docker_image[save hello-world]' do
expect(chef_run).to save_docker_image('save hello-world').with(
repo: 'hello-world',
destination: '/hello-world.tar'
)
end
end
context 'testing :load action' do
it 'pulls docker_image[cirros]' do
expect(chef_run).to pull_docker_image('cirros')
end
it 'saves docker_image[save cirros]' do
expect(chef_run).to save_docker_image('save cirros').with(
destination: '/cirros.tar'
)
end
it 'removes docker_image[remove cirros]' do
expect(chef_run).to remove_docker_image('remove cirros').with(
repo: 'cirros'
)
end
it 'loads docker_image[load cirros]' do
expect(chef_run).to load_docker_image('load cirros').with(
source: '/cirros.tar'
)
end
it 'creates file[/marker_load_cirros-1]' do
expect(chef_run).to create_file('/marker_load_cirros-1')
end
end
context 'testing the :build action from Dockerfile' do
it 'creates directory[/usr/local/src/container1]' do
expect(chef_run).to create_directory('/usr/local/src/container1')
end
it 'creates cookbook_file[/usr/local/src/container1/Dockerfile]' do
expect(chef_run).to create_cookbook_file('/usr/local/src/container1/Dockerfile').with(
source: 'Dockerfile_1'
)
end
it 'build docker_image[someara/image-1]' do
expect(chef_run).to build_docker_image('someara/image-1').with(
tag: 'v0.1.0',
source: '/usr/local/src/container1/Dockerfile',
force: true
)
end
it 'creates file[/marker_image_image-1]' do
expect(chef_run).to create_file('/marker_image_image-1')
end
end
context 'testing the :build action from directory' do
it 'creates directory[/usr/local/src/container2]' do
expect(chef_run).to create_directory('/usr/local/src/container2')
end
it 'creates file[/usr/local/src/container2/foo.txt]' do
expect(chef_run).to create_file('/usr/local/src/container2/foo.txt').with(
content: 'Dockerfile_2 contains ADD for this file'
)
end
it 'creates cookbook_file[/usr/local/src/container2/Dockerfile]' do
expect(chef_run).to create_cookbook_file('/usr/local/src/container2/Dockerfile').with(
source: 'Dockerfile_2'
)
end
it 'build_if_missing docker_image[someara/image.2]' do
expect(chef_run).to build_if_missing_docker_image('someara/image.2').with(
tag: 'v0.1.0',
source: '/usr/local/src/container2'
)
end
end
context 'testing the :build action from a tarball' do
it 'creates cookbook_file[/usr/local/src/image_3.tar]' do
expect(chef_run).to create_cookbook_file('/usr/local/src/image_3.tar').with(
source: 'image_3.tar'
)
end
it 'build_if_missing docker_image[image_3]' do
expect(chef_run).to build_if_missing_docker_image('image_3').with(
tag: 'v0.1.0',
source: '/usr/local/src/image_3.tar'
)
end
end
context 'testing the :import action' do
it 'imports docker_image[hello-again]' do
expect(chef_run).to import_docker_image('hello-again').with(
tag: 'v0.1.0',
source: '/hello-world.tar'
)
end
end
context 'testing images with dots and dashes in the name' do
it 'pulls docker_image[someara/name-w-dashes]' do
expect(chef_run).to pull_docker_image('someara/name-w-dashes')
end
it 'pulls docker_image[someara/name.w.dots]' do
expect(chef_run).to pull_docker_image('someara/name.w.dots')
end
end
context 'when setting up a local registry' do
it 'includes the "docker_test::registry" recipe' do
expect(chef_run).to include_recipe('docker_test::registry')
end
end
context 'testing pushing to a private registry' do
it 'tags docker_tag[private repo tag for name-w-dashes:v1.0.1]' do
expect(chef_run).to tag_docker_tag('private repo tag for name-w-dashes:v1.0.1').with(
target_repo: 'hello-again',
target_tag: 'v0.1.0',
to_repo: 'localhost:5043/someara/name-w-dashes',
to_tag: 'latest'
)
end
it 'tags docker_tag[private repo tag for name.w.dots]' do
expect(chef_run).to tag_docker_tag('private repo tag for name.w.dots').with(
target_repo: 'busybox',
target_tag: 'latest',
to_repo: 'localhost:5043/someara/name.w.dots',
to_tag: 'latest'
)
end
it 'pushes docker_image[localhost:5043/someara/name-w-dashes]' do
expect(chef_run).to push_docker_image('localhost:5043/someara/name-w-dashes')
end
it 'creates file[/marker_image_private_name-w-dashes]' do
expect(chef_run).to create_file('/marker_image_private_name-w-dashes')
end
it 'pushes docker_image[localhost:5043/someara/name.w.dots]' do
expect(chef_run).to push_docker_image('localhost:5043/someara/name.w.dots')
end
it 'pushes docker_image[localhost:5043/someara/name.w.dots] with tag v0.1.0' do
expect(chef_run).to push_docker_image('localhost:5043/someara/name.w.dots').with(
tag: 'v0.1.0'
)
end
it 'login docker_registry[localhost:5043]' do
expect(chef_run).to login_docker_registry('localhost:5043').with(
username: 'testuser',
password: 'testpassword',
email: 'alice@computers.biz'
)
end
it 'creates file[/marker_image_private_name.w.dots]' do
expect(chef_run).to create_file('/marker_image_private_name.w.dots')
end
end
context 'testing pulling from public Dockerhub after being authenticated to a private one' do
it 'pulls docker_image[fedora]' do
expect(chef_run).to pull_docker_image('fedora')
end
end
end

View File

@ -0,0 +1,140 @@
require 'spec_helper'
describe 'docker_test::installation_package' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '18.04',
step_into: ['docker_installation_package']).converge(described_recipe)
end
context 'testing default action, default properties' do
it 'installs docker' do
expect(chef_run).to create_docker_installation_package('default').with(version: '18.06.0')
end
end
# Coverage of all recent docker versions
# To ensure test coverage and backwards compatibility
# With the frequent changes in package naming convention
# List generated from
# https://download.docker.com/linux/ubuntu/dists/#{distro}/stable/binary-amd64/Packages
context 'version strings for Ubuntu 18.04' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '18.04',
step_into: ['docker_installation_package']).converge(described_recipe)
end
[
# Bionic
{ docker_version: '18.03.1', expected: '18.03.1~ce~3-0~ubuntu' },
{ docker_version: '18.06.0', expected: '18.06.0~ce~3-0~ubuntu' },
{ docker_version: '18.06.1', expected: '18.06.1~ce~3-0~ubuntu' },
{ docker_version: '18.09.0', expected: '5:18.09.0~3-0~ubuntu-bionic' },
].each do |suite|
it 'generates the correct version string ubuntu bionic' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
context 'version strings for Ubuntu 16.04' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '16.04',
step_into: ['docker_installation_package']).converge(described_recipe)
end
[
{ docker_version: '17.03.0', expected: '17.03.0~ce-0~ubuntu-xenial' },
{ docker_version: '17.03.1', expected: '17.03.1~ce-0~ubuntu-xenial' },
{ docker_version: '17.03.2', expected: '17.03.2~ce-0~ubuntu-xenial' },
{ docker_version: '17.03.3', expected: '17.03.3~ce-0~ubuntu-xenial' },
{ docker_version: '17.06.0', expected: '17.06.0~ce-0~ubuntu' },
{ docker_version: '17.06.1', expected: '17.06.1~ce-0~ubuntu' },
{ docker_version: '17.09.0', expected: '17.09.0~ce-0~ubuntu' },
{ docker_version: '17.09.1', expected: '17.09.1~ce-0~ubuntu' },
{ docker_version: '17.12.0', expected: '17.12.0~ce-0~ubuntu' },
{ docker_version: '17.12.1', expected: '17.12.1~ce-0~ubuntu' },
{ docker_version: '18.03.0', expected: '18.03.0~ce-0~ubuntu' },
{ docker_version: '18.03.1', expected: '18.03.1~ce-0~ubuntu' },
{ docker_version: '18.06.0', expected: '18.06.0~ce~3-0~ubuntu' },
{ docker_version: '18.06.1', expected: '18.06.1~ce~3-0~ubuntu' },
{ docker_version: '18.09.0', expected: '5:18.09.0~3-0~ubuntu-xenial' },
].each do |suite|
it 'generates the correct version string ubuntu xenial' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
context 'version strings for Debian 9.5' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'debian',
version: '9.5',
step_into: ['docker_installation_package']).converge(described_recipe)
end
[
{ docker_version: '17.03.0', expected: '17.03.0~ce-0~debian-stretch' },
{ docker_version: '17.03.1', expected: '17.03.1~ce-0~debian-stretch' },
{ docker_version: '17.03.2', expected: '17.03.2~ce-0~debian-stretch' },
{ docker_version: '17.03.3', expected: '17.03.3~ce-0~debian-stretch' },
{ docker_version: '17.06.0', expected: '17.06.0~ce-0~debian' },
{ docker_version: '17.06.1', expected: '17.06.1~ce-0~debian' },
{ docker_version: '17.09.0', expected: '17.09.0~ce-0~debian' },
{ docker_version: '17.09.1', expected: '17.09.1~ce-0~debian' },
{ docker_version: '17.12.0', expected: '17.12.0~ce-0~debian' },
{ docker_version: '17.12.1', expected: '17.12.1~ce-0~debian' },
{ docker_version: '18.03.0', expected: '18.03.0~ce-0~debian' },
{ docker_version: '18.03.1', expected: '18.03.1~ce-0~debian' },
{ docker_version: '18.06.0', expected: '18.06.0~ce~3-0~debian' },
{ docker_version: '18.06.1', expected: '18.06.1~ce~3-0~debian' },
{ docker_version: '18.09.0', expected: '5:18.09.0~3-0~debian-stretch' },
].each do |suite|
it 'generates the correct version string debian stretch' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
context 'version strings for Centos 7' do
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'centos',
version: '7',
step_into: ['docker_installation_package']).converge(described_recipe)
end
# https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
[
{ docker_version: '17.03.0', expected: '17.03.0.ce-1.el7.centos' },
{ docker_version: '17.03.1', expected: '17.03.1.ce-1.el7.centos' },
{ docker_version: '17.03.2', expected: '17.03.2.ce-1.el7.centos' },
{ docker_version: '17.03.3', expected: '17.03.3.ce-1.el7' },
{ docker_version: '17.06.0', expected: '17.06.0.ce-1.el7.centos' },
{ docker_version: '17.06.1', expected: '17.06.1.ce-1.el7.centos' },
{ docker_version: '17.09.0', expected: '17.09.0.ce-1.el7.centos' },
{ docker_version: '17.09.1', expected: '17.09.1.ce-1.el7.centos' },
{ docker_version: '17.12.0', expected: '17.12.0.ce-1.el7.centos' },
{ docker_version: '17.12.1', expected: '17.12.1.ce-1.el7.centos' },
{ docker_version: '18.03.0', expected: '18.03.0.ce-1.el7.centos' },
{ docker_version: '18.03.1', expected: '18.03.1.ce-1.el7.centos' },
{ docker_version: '18.06.0', expected: '18.06.0.ce-3.el7' },
{ docker_version: '18.06.1', expected: '18.06.1.ce-3.el7' },
{ docker_version: '18.09.0', expected: '18.09.0-3.el7' },
].each do |suite|
it 'generates the correct version string centos 7' do
custom_resource = chef_run.docker_installation_package('default')
actual = custom_resource.version_string(suite[:docker_version])
expect(actual).to eq(suite[:expected])
end
end
end
end
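The default installation expectation at the top of this spec corresponds to a recipe declaration roughly like the sketch below; per the tables above, version_string then expands the bare version into the distro-specific package version (e.g. 18.06.0~ce~3-0~ubuntu on Ubuntu 18.04):
docker_installation_package 'default' do
  version '18.06.0'
  action :create
end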

View File

@ -0,0 +1,174 @@
require 'spec_helper'
describe 'docker_test::network' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
context 'creates a network with unicode name' do
it 'creates docker_network_seseme_straße' do
expect(chef_run).to create_docker_network('seseme_straße')
end
end
context 'creates a network with defaults' do
it 'creates docker_network_a' do
expect(chef_run).to create_docker_network('network_a')
end
it 'creates echo-base-network_a' do
expect(chef_run).to run_docker_container('echo-base-network_a')
end
it 'creates echo-station-network_a' do
expect(chef_run).to run_docker_container('echo-station-network_a')
end
end
context 'when testing network deletion' do
it 'creates network_b with the CLI' do
expect(chef_run).to run_execute('create network_b').with(
command: 'docker network create network_b'
)
end
it 'creates /marker_delete_network_b' do
expect(chef_run).to create_file('/marker_delete_network_b')
end
it 'deletes docker_network[network_b]' do
expect(chef_run).to delete_docker_network('network_b')
end
end
context 'creates a network with subnet and gateway' do
it 'creates docker_network_c' do
expect(chef_run).to create_docker_network('network_c').with(
subnet: '192.168.88.0/24',
gateway: '192.168.88.1'
)
end
it 'creates echo-base-network_c' do
expect(chef_run).to run_docker_container('echo-base-network_c')
end
it 'creates echo-station-network_c' do
expect(chef_run).to run_docker_container('echo-station-network_c')
end
end
context 'creates a network with aux_address' do
it 'creates docker_network_d' do
expect(chef_run).to create_docker_network('network_d').with(
subnet: '192.168.89.0/24',
gateway: '192.168.89.1',
aux_address: ['a=192.168.89.2', 'b=192.168.89.3']
)
end
it 'creates echo-base-network_d' do
expect(chef_run).to run_docker_container('echo-base-network_d')
end
it 'creates echo-station-network_d' do
expect(chef_run).to run_docker_container('echo-station-network_d')
end
end
context 'creates a network with overlay driver' do
it 'creates network_e' do
expect(chef_run).to create_docker_network('network_e').with(
driver: 'overlay'
)
end
end
context 'creates a network with an ip-range' do
it 'creates docker_network_f' do
expect(chef_run).to create_docker_network('network_f').with(
driver: 'bridge',
subnet: '172.28.0.0/16',
gateway: '172.28.5.254',
ip_range: '172.28.5.0/24'
)
end
it 'creates echo-base-network_f' do
expect(chef_run).to run_docker_container('echo-base-network_f')
end
it 'creates echo-station-network_f' do
expect(chef_run).to run_docker_container('echo-station-network_f')
end
end
context 'create an overlay network with multiple subnets' do
it 'creates docker_network_g' do
expect(chef_run).to create_docker_network('network_g').with(
driver: 'overlay',
subnet: ['192.168.0.0/16', '192.170.0.0/16'],
gateway: ['192.168.0.100', '192.170.0.100'],
ip_range: '192.168.1.0/24',
aux_address: ['a=192.168.1.5', 'b=192.168.1.6', 'a=192.170.1.5', 'b=192.170.1.6']
)
end
it 'creates echo-base-network_g' do
expect(chef_run).to run_docker_container('echo-base-network_g')
end
it 'creates echo-station-network_g' do
expect(chef_run).to run_docker_container('echo-station-network_g')
end
end
context 'connect and disconnect a container' do
it 'creates docker_network_h1' do
expect(chef_run).to create_docker_network('network_h1')
end
it 'creates docker_network_h2' do
expect(chef_run).to create_docker_network('network_h2')
end
it 'creates container1-network_h' do
expect(chef_run).to run_docker_container('container1-network_h')
end
it 'creates /marker/network_h' do
expect(chef_run).to create_file('/marker_network_h')
end
it 'connects container1-network_h with network_h2' do
expect(chef_run).to connect_docker_network('network_h2 connector').with(
container: 'container1-network_h'
)
end
it 'disconnects container1-network_h from network_h1' do
expect(chef_run).to disconnect_docker_network('network_h1 disconnector').with(
container: 'container1-network_h'
)
end
end
context 'ipv6 network' do
it 'creates docker_network_ipv6' do
expect(chef_run).to create_docker_network('network_ipv6').with(
enable_ipv6: true,
subnet: 'fd00:dead:beef::/48'
)
end
it 'creates docker_network_ipv4' do
expect(chef_run).to create_docker_network('network_ipv4')
end
end
context 'internal network' do
it 'creates docker_network_internal' do
expect(chef_run).to create_docker_network('network_internal').with(
internal: true
)
end
end
end

View File

@ -0,0 +1,118 @@
require 'spec_helper'
describe 'docker_test::plugin' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
let(:sshfs_caps) do
[
{
'Name' => 'network',
'Value' => ['host'],
},
{
'Name' => 'mount',
'Value' => ['/var/lib/docker/plugins/'],
},
{
'Name' => 'mount',
'Value' => [''],
},
{
'Name' => 'device',
'Value' => ['/dev/fuse'],
},
{
'Name' => 'capabilities',
'Value' => ['CAP_SYS_ADMIN'],
},
]
end
context 'testing default action, default properties, but with privilege grant' do
it 'installs vieux/sshfs' do
expect(chef_run).to install_docker_plugin('vieux/sshfs').with(
api_retries: 3,
grant_privileges: sshfs_caps,
options: {},
remote_tag: 'latest'
)
end
end
context 'reconfigure existing plugin' do
it 'enables debug on vieux/sshfs' do
expect(chef_run).to update_docker_plugin('configure vieux/sshfs').with(
api_retries: 3,
grant_privileges: [],
options: {
'DEBUG' => '1',
},
local_alias: 'vieux/sshfs',
remote_tag: 'latest'
)
end
end
context 'testing the remove action' do
it 'removes vieux/sshfs' do
expect(chef_run).to remove_docker_plugin('remove vieux/sshfs').with(
api_retries: 3,
grant_privileges: [],
options: {},
local_alias: 'vieux/sshfs',
remote_tag: 'latest'
)
end
end
context 'testing configure and install at the same time' do
it 'installs wetopi/rbd' do
expect(chef_run).to install_docker_plugin('rbd').with(
remote: 'wetopi/rbd',
remote_tag: '1.0.1',
grant_privileges: true,
options: {
'LOG_LEVEL' => '4',
}
)
end
it 'removes wetopi/rbd again' do
expect(chef_run).to remove_docker_plugin('remove rbd').with(
local_alias: 'rbd'
)
end
end
context 'install is idempotent' do
it 'installs vieux/sshfs two times' do
expect(chef_run).to install_docker_plugin('sshfs 2.1').with(
remote: 'vieux/sshfs',
remote_tag: 'latest',
local_alias: 'sshfs',
grant_privileges: true
)
expect(chef_run).to install_docker_plugin('sshfs 2.2').with(
remote: 'vieux/sshfs',
remote_tag: 'latest',
local_alias: 'sshfs',
grant_privileges: true
)
end
end
context 'test :enable / :disable action' do
it 'enables sshfs' do
expect(chef_run).to enable_docker_plugin('enable sshfs').with(
local_alias: 'sshfs'
)
end
it 'disables sshfs' do
expect(chef_run).to disable_docker_plugin('disable sshfs').with(
local_alias: 'sshfs'
)
end
end
end

View File

@ -0,0 +1,125 @@
require 'spec_helper'
describe 'docker_test::registry' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
before do
stub_command('/usr/bin/test -f /tmp/registry/tls/ca.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/ca-key.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/key.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/cert.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/server-key.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.pem').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/client.csr').and_return(false)
stub_command('/usr/bin/test -f /tmp/registry/tls/server.csr').and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=registry_service$'` ]").and_return(false)
stub_command("[ ! -z `docker ps -qaf 'name=registry_proxy$'` ]").and_return(false)
stub_command('netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"').and_return(false)
end
context 'when compiling the recipe' do
it 'creates directory[/tmp/registry/tls]' do
expect(chef_run).to create_directory('/tmp/registry/tls').with(
recursive: true
)
end
it 'runs bash[creating private key for docker server]' do
expect(chef_run).to run_bash('creating private key for docker server')
end
it 'runs bash[generating CA private and public key]' do
expect(chef_run).to run_bash('generating CA private and public key')
end
it 'runs bash[generating certificate request for server]' do
expect(chef_run).to run_bash('generating certificate request for server')
end
it 'creates file[/tmp/registry/tls/server-extfile.cnf]' do
expect(chef_run).to create_file('/tmp/registry/tls/server-extfile.cnf')
end
it 'runs bash[signing request for server]' do
expect(chef_run).to run_bash('signing request for server')
end
it 'runs bash[creating private key for docker client]' do
expect(chef_run).to run_bash('creating private key for docker client')
end
it 'runs bash[generating certificate request for client]' do
expect(chef_run).to run_bash('generating certificate request for client')
end
it 'creates file[/tmp/registry/tls/client-extfile.cnf]' do
expect(chef_run).to create_file('/tmp/registry/tls/client-extfile.cnf')
end
it 'runs bash[signing request for client]' do
expect(chef_run).to run_bash('signing request for client')
end
it 'pulls docker_image[nginx]' do
expect(chef_run).to pull_docker_image('nginx').with(
tag: '1.9'
)
end
it 'pulls docker_image[registry]' do
expect(chef_run).to pull_docker_image('registry').with(
tag: '2.6.1'
)
end
it 'creates directory[/tmp/registry/auth]' do
expect(chef_run).to create_directory('/tmp/registry/auth').with(
recursive: true,
owner: 'root',
mode: '0755'
)
end
it 'creates template[/tmp/registry/auth/registry.conf]' do
expect(chef_run).to create_template('/tmp/registry/auth/registry.conf').with(
source: 'registry/auth/registry.conf.erb',
owner: 'root',
mode: '0755'
)
end
it 'runs execute[copy server cert for registry]' do
expect(chef_run).to run_execute('copy server cert for registry').with(
command: 'cp /tmp/registry/tls/server.pem /tmp/registry/auth/server.crt',
creates: '/tmp/registry/auth/server.crt'
)
end
it 'runs execute[copy server key for registry]' do
expect(chef_run).to run_execute('copy server key for registry').with(
command: 'cp /tmp/registry/tls/server-key.pem /tmp/registry/auth/server.key',
creates: '/tmp/registry/auth/server.key'
)
end
it 'creates template[/tmp/registry/auth/registry.password]' do
expect(chef_run).to create_template('/tmp/registry/auth/registry.password').with(
source: 'registry/auth/registry.password.erb',
owner: 'root',
mode: '0755'
)
end
it 'runs bash[start docker registry]' do
expect(chef_run).to run_bash('start docker registry')
end
it 'runs bash[start docker registry proxy]' do
expect(chef_run).to run_bash('start docker registry proxy')
end
it 'runs bash[wait for docker registry and proxy]' do
expect(chef_run).to run_bash('wait for docker registry and proxy')
end
end
end

View File

@ -0,0 +1,55 @@
require 'spec_helper'
require_relative '../../libraries/helpers_service'
describe 'docker_test::service' do
before do
allow_any_instance_of(DockerCookbook::DockerHelpers::Service).to receive(:installed_docker_version).and_return('18.06.0')
end
cached(:chef_run) do
ChefSpec::SoloRunner.new(platform: 'ubuntu',
version: '16.04',
step_into: %w(helpers_service docker_service docker_service_base docker_service_manager docker_service_manager_systemd)).converge(described_recipe)
end
# If you have to change this file you most likely updated a default service option.
# Please note that this will require a docker service restart,
# which is consumer-impacting.
expected = <<EOH
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service
Requires=docker.socket
Wants=network-online.target
[Service]
Type=notify
ExecStartPre=/sbin/sysctl -w net.ipv4.ip_forward=1
ExecStartPre=/sbin/sysctl -w net.ipv6.conf.all.forwarding=1
ExecStart=/usr/bin/dockerd --bip=10.10.10.0/24 --group=docker --default-address-pool=base=10.10.10.0/16,size=24 --pidfile=/var/run/docker.pid --storage-driver=overlay2
ExecStartPost=/usr/lib/docker/docker-wait-ready
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOH
it 'creates docker_service[default]' do
expect(chef_run).to render_file('/etc/systemd/system/docker.service').with_content { |content|
# For tests which run on windows - convert CRLF
expect(content.gsub(/[\r\n]+/m, "\n")).to match(expected.gsub(/[\r\n]+/m, "\n"))
}
end
end

View File

@ -0,0 +1,47 @@
require 'spec_helper'
describe 'docker_test::volume' do
cached(:chef_run) { ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04').converge(described_recipe) }
it 'pull_if_missing docker_image[alpine]' do
expect(chef_run).to pull_if_missing_docker_image('alpine').with(
tag: '3.1'
)
end
context 'testing remove action' do
it 'executes docker volume create --name remove_me' do
expect(chef_run).to run_execute('docker volume create --name remove_me')
end
it 'creates file /marker_remove_me' do
expect(chef_run).to create_file('/marker_remove_me')
end
it 'removes docker_volume[remove_me]' do
expect(chef_run).to remove_docker_volume('remove_me')
end
end
context 'testing create action' do
it 'creates volume hello' do
expect(chef_run).to create_docker_volume('hello')
end
it 'creates volume hello again' do
expect(chef_run).to create_docker_volume('hello again').with(
volume_name: 'hello_again'
)
end
context 'testing create action' do
it 'runs file_writer' do
expect(chef_run).to run_if_missing_docker_container('file_writer')
end
it 'runs file_reader' do
expect(chef_run).to run_if_missing_docker_container('file_reader')
end
end
end
end

View File

@ -0,0 +1,82 @@
# require 'rspec'
# require 'rspec/its'
# require_relative '../libraries/helpers_container'
#
# class FakeContainerForTestingImageProperty
# include DockerCookbook::DockerHelpers::Container
#
# def initialize(attributes = {})
# @attributes = attributes
# end
#
# def repo(value = nil)
# attributes['repo'] = value if value
# attributes['repo']
# end
#
# def tag(value = nil)
# attributes['tag'] = value if value
# attributes['tag'] || 'latest'
# end
#
# private
#
# attr_reader :attributes
# end
#
# describe DockerCookbook::DockerHelpers::Container do
# let(:helper) { FakeContainerForTestingImageProperty.new }
#
# describe '#image' do
# subject { helper }
#
# context "If you say: repo 'blah'" do
# before { helper.repo 'blah' }
# its(:image) { is_expected.to eq('blah:latest') }
# end
#
# context "If you say: repo 'blah'; tag '3.1'" do
# before do
# helper.repo 'blah'
# helper.tag '3.1'
# end
# its(:image) { is_expected.to eq('blah:3.1') }
# end
#
# context "If you say: image 'blah'" do
# before { helper.image 'blah' }
# its(:repo) { is_expected.to eq('blah') }
# its(:tag) { is_expected.to eq('latest') }
# end
#
# context "If you say: image 'blah:3.1'" do
# before { helper.image 'blah:3.1' }
# its(:repo) { is_expected.to eq('blah') }
# its(:tag) { is_expected.to eq('3.1') }
# end
#
# context "If you say: image 'repo/blah'" do
# before { helper.image 'repo/blah' }
# its(:repo) { is_expected.to eq('repo/blah') }
# its(:tag) { is_expected.to eq('latest') }
# end
#
# context "If you say: image 'repo/blah:3.1'" do
# before { helper.image 'repo/blah:3.1' }
# its(:repo) { is_expected.to eq('repo/blah') }
# its(:tag) { is_expected.to eq('3.1') }
# end
#
# context "If you say: image 'repo:1337/blah'" do
# before { helper.image 'repo:1337/blah' }
# its(:repo) { is_expected.to eq('repo:1337/blah') }
# its(:tag) { is_expected.to eq('latest') }
# end
#
# context "If you say: image 'repo:1337/blah:3.1'" do
# before { helper.image 'repo:1337/blah:3.1' }
# its(:repo) { is_expected.to eq('repo:1337/blah') }
# its(:tag) { is_expected.to eq('3.1') }
# end
# end
# end

View File

@ -0,0 +1,49 @@
# require 'rspec'
# require_relative '../libraries/helpers_network'
#
# describe Class.new { include DockerCookbook::DockerHelpers::Network } do
# subject(:helper) { Class.new { include DockerCookbook::DockerHelpers::Network } }
# let(:subnets) do
# %w(
# 192.168.0.0/24
# )
# end
#
# let(:ip_ranges) do
# %w(
# 192.168.0.31/28
# )
# end
#
# let(:gateways) do
# %w(
# 192.168.0.34
# )
# end
#
# let(:aux_addresses) do
# %w(
# foo=192.168.0.34
# bar=192.168.0.124
# )
# end
#
# describe '#consolidate_ipam' do
# subject { described_class.new.consolidate_ipam(subnets, ip_ranges, gateways, aux_addresses) }
# it 'should have a subnet' do
# expect(subject).to include(include('Subnet' => '192.168.0.0/24'))
# end
#
# it 'should have aux address' do
# expect(subject).to include(include('AuxiliaryAddresses' => { 'foo' => '192.168.0.34', 'bar' => '192.168.0.124' }))
# end
#
# it 'should have gateways' do
# expect(subject).to include(include('Gateway' => '192.168.0.34'))
# end
#
# it 'should have ip range' do
# expect(subject).to include(include('IPRange' => '192.168.0.31/28'))
# end
# end
# end

View File

@ -0,0 +1,55 @@
require 'spec_helper'
require 'docker'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_container'
describe DockerCookbook::DockerContainer do
let(:resource) { DockerCookbook::DockerContainer.new('hello_world') }
it 'has a default action of [:run]' do
expect(resource.action).to eql([:run])
end
describe 'gets ip_address_from_container_networks' do
let(:options) { { 'id' => rand(10_000).to_s } }
subject do
Docker::Container.send(:new, Docker.connection, options)
end
# https://docs.docker.com/engine/api/version-history/#v121-api-changes
context 'when docker API < 1.21' do
let(:ip_address) { '10.0.0.1' }
let(:options) do
{
'id' => rand(10_000).to_s,
'IPAddress' => ip_address,
}
end
it 'gets ip_address as nil' do
actual = resource.ip_address_from_container_networks(subject)
expect { resource.ip_address_from_container_networks(subject) }.not_to raise_error
expect(actual).to eq(nil)
end
end
context 'when docker API > 1.21' do
let(:ip_address) { '10.0.0.1' }
let(:options) do
{
'id' => rand(10_000).to_s,
'NetworkSettings' => {
'Networks' => {
'bridge' => {
'IPAMConfig' => {
'IPv4Address' => ip_address,
},
},
},
},
}
end
it 'gets ip_address' do
actual = resource.ip_address_from_container_networks(subject)
expect(actual).to eq(ip_address)
end
end
end
end

View File

@ -0,0 +1,126 @@
require 'spec_helper'
require 'chef'
require 'excon'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_container'
describe 'docker_container' do
step_into :docker_container
platform 'ubuntu'
# Info returned by docker api
# https://docs.docker.com/engine/api/v1.39/#tag/Container
let(:container) do
{
'Id' => '123456789',
'IPAddress' => '10.0.0.1',
'Image' => 'ubuntu:bionic',
'Names' => ['/hello_world'],
'Config' => { 'Labels' => {} },
'HostConfig' => { 'RestartPolicy' => { 'Name' => 'unless-stopped',
'MaximumRetryCount' => 1 },
'Binds' => [],
'ReadonlyRootfs' => false },
'State' => 'not running',
'Warnings' => [],
}.to_json
end
# https://docs.docker.com/engine/api/v1.39/#tag/Image
let(:image) do
{ 'Id' => 'bf119e2',
'Repository' => 'ubuntu', 'Tag' => 'bionic',
'Created' => 1_364_102_658, 'Size' => 24_653,
'VirtualSize' => 180_116_135,
'Config' => { 'Labels' => {} } }.to_json
end
# https://docs.docker.com/engine/api/v1.39/#operation/SystemInfo
let(:info) do
{ 'Labels' => {} }.to_json
end
# https://docs.docker.com/engine/api/v1.39/#operation/ContainerCreate
let(:create) do
{
'Id' => 'e90e34656806',
'Warnings' => [],
}.to_json
end
before do
# Ensure docker api calls are mocked
# It is low level and much easier to do in Excon
# Plus, the low level mock allows testing this cookbook
# for multiple docker apis and docker-api gems
# https://github.com/excon/excon#stubs
Excon.defaults[:mock] = true
Excon.stub({ method: :get, path: '/v1.16/containers/hello_world/json' }, body: container, status: 200)
Excon.stub({ method: :get, path: '/v1.16/images/ubuntu:bionic/json' }, body: image, status: 200)
Excon.stub({ method: :get, path: '/v1.16/info' }, body: info, status: 200)
Excon.stub({ method: :delete, path: '/v1.16/containers/123456789' }, body: '', status: 200)
Excon.stub({ method: :post, path: '/v1.16/containers/create' }, body: create, status: 200)
Excon.stub({ method: :get, path: '/v1.16/containers/123456789/start' }, body: '', status: 200)
end
context 'creates a docker container with default options' do
recipe do
docker_container 'hello_world' do
tag 'ubuntu:latest'
action :create
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to create_docker_container('hello_world').with(
tag: 'ubuntu:latest',
create_options: { 'name' => 'hello_world', 'Image' => 'hello_world:ubuntu:latest', 'Labels' => {}, 'Cmd' => nil, 'AttachStderr' => false, 'AttachStdin' => false, 'AttachStdout' => false, 'Domainname' => '', 'Entrypoint' => nil, 'Env' => [], 'ExposedPorts' => {}, 'Hostname' => nil, 'MacAddress' => nil, 'NetworkDisabled' => false, 'OpenStdin' => false, 'StdinOnce' => false, 'Tty' => false, 'User' => '', 'Volumes' => {}, 'WorkingDir' => '', 'HostConfig' => { 'Binds' => nil, 'CapAdd' => nil, 'CapDrop' => nil, 'CgroupParent' => '', 'CpuShares' => 0, 'CpusetCpus' => '', 'Devices' => [], 'Dns' => [], 'DnsSearch' => [], 'ExtraHosts' => nil, 'IpcMode' => '', 'Init' => nil, 'KernelMemory' => 0, 'Links' => nil, 'LogConfig' => nil, 'Memory' => 0, 'MemorySwap' => 0, 'MemorySwappiness' => 0, 'MemoryReservation' => 0, 'NetworkMode' => 'bridge', 'OomKillDisable' => false, 'OomScoreAdj' => -500, 'Privileged' => false, 'PidMode' => '', 'PortBindings' => {}, 'PublishAllPorts' => false, 'RestartPolicy' => { 'Name' => nil, 'MaximumRetryCount' => 0 }, 'ReadonlyRootfs' => false, 'Runtime' => 'runc', 'SecurityOpt' => nil, 'Sysctls' => {}, 'Ulimits' => nil, 'UsernsMode' => '', 'UTSMode' => '', 'VolumesFrom' => nil, 'VolumeDriver' => nil }, 'NetworkingConfig' => { 'EndpointsConfig' => { 'bridge' => { 'IPAMConfig' => { 'IPv4Address' => nil }, 'Aliases' => [] } } } }
)
}
end
context 'creates a docker container with healthcheck options' do
recipe do
docker_container 'hello_world' do
tag 'ubuntu:latest'
health_check(
'Test' =>
[
'string',
],
'Interval' => 0,
'Timeout' => 0,
'Retries' => 0,
'StartPeriod' => 0
)
action :create
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to create_docker_container('hello_world').with(
tag: 'ubuntu:latest',
create_options: { 'name' => 'hello_world', 'Image' => 'hello_world:ubuntu:latest', 'Labels' => {}, 'Cmd' => nil, 'AttachStderr' => false, 'AttachStdin' => false, 'AttachStdout' => false, 'Domainname' => '', 'Entrypoint' => nil, 'Env' => [], 'ExposedPorts' => {}, 'Hostname' => nil, 'MacAddress' => nil, 'NetworkDisabled' => false, 'OpenStdin' => false, 'StdinOnce' => false, 'Tty' => false, 'User' => '', 'Volumes' => {}, 'WorkingDir' => '', 'HostConfig' => { 'Binds' => nil, 'CapAdd' => nil, 'CapDrop' => nil, 'CgroupParent' => '', 'CpuShares' => 0, 'CpusetCpus' => '', 'Devices' => [], 'Dns' => [], 'DnsSearch' => [], 'ExtraHosts' => nil, 'IpcMode' => '', 'Init' => nil, 'KernelMemory' => 0, 'Links' => nil, 'LogConfig' => nil, 'Memory' => 0, 'MemorySwap' => 0, 'MemorySwappiness' => 0, 'MemoryReservation' => 0, 'NetworkMode' => 'bridge', 'OomKillDisable' => false, 'OomScoreAdj' => -500, 'Privileged' => false, 'PidMode' => '', 'PortBindings' => {}, 'PublishAllPorts' => false, 'RestartPolicy' => { 'Name' => nil, 'MaximumRetryCount' => 0 }, 'ReadonlyRootfs' => false, 'Runtime' => 'runc', 'SecurityOpt' => nil, 'Sysctls' => {}, 'Ulimits' => nil, 'UsernsMode' => '', 'UTSMode' => '', 'VolumesFrom' => nil, 'VolumeDriver' => nil }, 'NetworkingConfig' => { 'EndpointsConfig' => { 'bridge' => { 'IPAMConfig' => { 'IPv4Address' => nil }, 'Aliases' => [] } } }, 'Healthcheck' => { 'Test' => ['string'], 'Interval' => 0, 'Timeout' => 0, 'Retries' => 0, 'StartPeriod' => 0 } }
)
}
end
context 'creates a docker container with default options for windows' do
platform 'windows'
recipe do
docker_container 'hello_world' do
tag 'ubuntu:latest'
action :create
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to create_docker_container('hello_world').with(
tag: 'ubuntu:latest',
# Should be missing 'MemorySwappiness'
create_options: { 'name' => 'hello_world', 'Image' => 'hello_world:ubuntu:latest', 'Labels' => {}, 'Cmd' => nil, 'AttachStderr' => false, 'AttachStdin' => false, 'AttachStdout' => false, 'Domainname' => '', 'Entrypoint' => nil, 'Env' => [], 'ExposedPorts' => {}, 'Hostname' => nil, 'MacAddress' => nil, 'NetworkDisabled' => false, 'OpenStdin' => false, 'StdinOnce' => false, 'Tty' => false, 'User' => '', 'Volumes' => {}, 'WorkingDir' => '', 'HostConfig' => { 'Binds' => nil, 'CapAdd' => nil, 'CapDrop' => nil, 'CgroupParent' => '', 'CpuShares' => 0, 'CpusetCpus' => '', 'Devices' => [], 'Dns' => [], 'DnsSearch' => [], 'ExtraHosts' => nil, 'IpcMode' => '', 'Init' => nil, 'KernelMemory' => 0, 'Links' => nil, 'LogConfig' => nil, 'Memory' => 0, 'MemorySwap' => 0, 'MemoryReservation' => 0, 'NetworkMode' => 'bridge', 'OomKillDisable' => false, 'OomScoreAdj' => -500, 'Privileged' => false, 'PidMode' => '', 'PortBindings' => {}, 'PublishAllPorts' => false, 'RestartPolicy' => { 'Name' => nil, 'MaximumRetryCount' => 0 }, 'ReadonlyRootfs' => false, 'Runtime' => 'runc', 'SecurityOpt' => nil, 'Sysctls' => {}, 'Ulimits' => nil, 'UsernsMode' => '', 'UTSMode' => '', 'VolumesFrom' => nil, 'VolumeDriver' => nil }, 'NetworkingConfig' => { 'EndpointsConfig' => { 'bridge' => { 'IPAMConfig' => { 'IPv4Address' => nil }, 'Aliases' => [] } } } }
)
}
end
end

View File

@ -0,0 +1,27 @@
require 'spec_helper'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_image_prune'
describe DockerCookbook::DockerImagePrune do
let(:resource) { DockerCookbook::DockerImagePrune.new('rspec') }
it 'has a default action of [:prune]' do
expect(resource.action).to eql([:prune])
end
it 'generates filter json' do
# Arrange
expected = '{"filters":["dangling=true","until=1h30m","label=com.example.vendor=ACME","label!=no_prune"]}'
resource.dangling = true
resource.prune_until = '1h30m'
resource.with_label = 'com.example.vendor=ACME'
resource.without_label = 'no_prune'
resource.action :prune
# Act
actual = resource.generate_json(resource)
# Assert
expect(actual).to eq(expected)
end
end

View File

@ -0,0 +1,88 @@
require 'spec_helper'
require_relative '../../libraries/docker_base'
require_relative '../../libraries/docker_registry'
describe 'docker_registry' do
step_into :docker_registry
platform 'ubuntu'
# Info returned by docker api
# https://docs.docker.com/engine/api/v1.39/#section/Authentication
let(:auth) do
{
'identitytoken' => '9cbafc023786cd7...',
}.to_json
end
before do
# Ensure docker api calls are mocked
# It is low level and much easier to do in Excon
# Plus, the low level mock allows testing this cookbook
# for multiple docker apis and docker-api gems
# https://github.com/excon/excon#stubs
Excon.defaults[:mock] = true
Excon.stub({ method: :post, path: '/v1.16/auth' }, body: auth, status: 200)
end
context 'logs into a docker registry with default options' do
recipe do
docker_registry 'chefspec_custom_registry' do
email 'chefspec_email'
password 'chefspec_password'
username 'chefspec_username'
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to login_docker_registry('chefspec_custom_registry').with(
email: 'chefspec_email',
password: 'chefspec_password',
username: 'chefspec_username',
host: nil
)
}
end
context 'logs into a docker registry with host' do
recipe do
docker_registry 'chefspec_custom_registry' do
email 'chefspec_email'
password 'chefspec_password'
username 'chefspec_username'
host 'chefspec_host'
end
end
it {
expect { chef_run }.to_not raise_error
expect(chef_run).to login_docker_registry('chefspec_custom_registry').with(
email: 'chefspec_email',
password: 'chefspec_password',
username: 'chefspec_username',
host: 'chefspec_host'
)
}
end
context 'logs into a docker registry with host environment variable' do
recipe do
docker_registry 'chefspec_custom_registry' do
email 'chefspec_email'
password 'chefspec_password'
username 'chefspec_username'
end
end
it {
# Set the environment variable
stub_const 'ENV', ENV.to_h.merge('DOCKER_HOST' => 'chefspec_host_environment_variable')
expect { chef_run }.to_not raise_error
expect(chef_run).to login_docker_registry('chefspec_custom_registry').with(
email: 'chefspec_email',
password: 'chefspec_password',
username: 'chefspec_username',
host: 'chefspec_host_environment_variable'
)
}
end
end

View File

@ -0,0 +1,21 @@
require 'chefspec'
require 'chefspec/berkshelf'
class RSpecHelper
class << self
attr_accessor :current_example
end
def self.reset!
@current_example = nil
end
end
RSpec.configure do |config|
config.filter_run focus: true
config.run_all_when_everything_filtered = true
config.before :each do
RSpecHelper.reset!
RSpecHelper.current_example = self
end
end

View File

@ -1,11 +1,5 @@
# Docker Upstart and SysVinit configuration file
# Customize location of Docker binary (especially for development testing).
DOCKER="<%= @dockerd_bin_link %>"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="<%= @docker_daemon_opts %>"
# If you need Docker to use an HTTP proxy, it can also be specified here.
<% if @config.http_proxy %>
export http_proxy="<%= @config.http_proxy %>"

View File

@ -1,10 +1,4 @@
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker
other_args="<%= @docker_daemon_opts %>"
# If you need Docker to use an HTTP proxy, it can also be specified here.
<% if @config.http_proxy %>

View File

@ -1,7 +1,13 @@
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Documentation=https://docs.docker.com
<% if @docker_socket.nil? %>
After=network-online.target firewalld.service
<% else %>
After=network-online.target <%= @docker_name %>.socket firewalld.service
Requires=<%= @docker_name %>.socket
<% end %>
Wants=network-online.target
[Service]
Type=notify
@ -17,6 +23,9 @@ Environment="NO_PROXY=<%= @config.no_proxy %>"
<% if @config.tmpdir %>
Environment="TMPDIR=<%= @config.tmpdir %>"
<% end %>
<% @env_vars.each do |key, val| %>
Environment="<%= key %>=<%= val %>"
<% end unless @env_vars.nil? %>
<% if @config.ipv4_forward %>
ExecStartPre=/sbin/sysctl -w net.ipv4.ip_forward=1
<% end %>
@ -25,11 +34,20 @@ ExecStartPre=/sbin/sysctl -w net.ipv6.conf.all.forwarding=1
<% end %>
ExecStart=<%= @docker_daemon_cmd %>
ExecStartPost=<%= @docker_wait_ready %>
Restart=always
MountFlags=<%= @docker_mount_flags %>
ExecReload=/bin/kill -s HUP $MAINPID
<% if @config.mount_flags %>
MountFlags=<%= @config.mount_flags %>
<% end %>
LimitNOFILE=1048576
LimitNPROC=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
<%= @systemd_args %>
[Install]

View File

@ -1,18 +1,38 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target <%= @docker_name %>.socket
<% if @docker_socket.nil? %>
After=network-online.target firewalld.service
<% else %>
After=network-online.target <%= @docker_name %>.socket firewalld.service
Requires=<%= @docker_name %>.socket
<% end %>
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=<%= @docker_mount_flags %>
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=1048576
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target

View File

@ -4,9 +4,8 @@ PartOf=<%= @docker_name %>.service
[Socket]
ListenStream=<%= @docker_socket %>
SocketMode=0660
SocketUser=root
SocketGroup=<%= @config.group %>
<%= @systemd_socket_args %>
[Install]
WantedBy=sockets.target

View File

@ -6,7 +6,7 @@ PartOf=<%= @docker_name %>.service
ListenStream=<%= @docker_socket %>
SocketMode=0660
SocketUser=root
SocketGroup=docker
SocketGroup=<%= @config.group %>
[Install]
WantedBy=sockets.target

View File

@ -23,6 +23,7 @@ BASE=<%= @docker_name %>
# modify these in /etc/default/$BASE (/etc/default/docker)
DOCKER=<%= @dockerd_bin_link %>
DOCKER_OPTS="<%= @docker_daemon_arg %> <%= @docker_daemon_opts %>"
# This is the pid file managed by docker itself
DOCKER_PIDFILE=/var/run/$BASE.pid
# This is the pid file created/managed by start-stop-daemon
@ -106,10 +107,15 @@ case "$1" in
--pidfile "$DOCKER_SSD_PIDFILE" \
--make-pidfile \
-- \
<%= @docker_daemon_arg %> -p "$DOCKER_PID_FILE" \
$DOCKER_OPTS \
-p "$DOCKER_PIDFILE" \
>> "$DOCKER_LOGFILE" 2>&1
<%= @docker_wait_ready %>
<%= @docker_wait_ready %> > /dev/null 2>&1
if [ $? -ne 0 ]; then
log_failure_msg "<%= @docker_socket %> failed to start"
exit 1
fi
log_end_msg $?
;;

View File

@ -34,24 +34,18 @@ end script
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
DOCKER=<%= @dockerd_bin_link %>
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
exec "$DOCKER" <%= @docker_daemon_arg %> $DOCKER_OPTS --raw-logs
exec <%= @docker_daemon_cmd %> <%= @docker_raw_logs_arg %>
end script
post-start script
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
<%= @docker_wait_ready %>
if [ $? -eq 0 ]; then
echo "<%= @docker_socket %> is up"
else
echo "<%= @docker_socket %> failed to come start"
echo "<%= @docker_socket %> failed to start"
exit 1
fi
end script

View File

@ -0,0 +1,35 @@
# CHANGELOG for docker_test
This file is used to list changes made in each version of docker_test.
## 0.5.1:
* Bugfix: Test docker_image :build for both file and directory source
## 0.5.0:
* Bugfix: Switch docker@0.25.0 deprecated dockerfile container LWRP attribute to source
## 0.4.0:
* Bugfix: Remove deprecated public_port in container_lwrp
* Bugfix: Add `init_type false` for busybox test containers
* Enhancement: Add tduffield/testcontainerd image, container, and tests
## 0.3.0:
* Enhancement: Change Dockerfile FROM to already downloaded busybox image instead of ubuntu
## 0.2.0:
* Added container_lwrp recipe
* Removed default recipe from image_lwrp recipe
## 0.1.0:
* Initial release of docker_test
- - -
Check the [Markdown Syntax Guide](http://daringfireball.net/projects/markdown/syntax) for help with Markdown.
The [Github Flavored Markdown page](http://github.github.com/github-flavored-markdown/) describes the differences between markdown on github and standard markdown.

View File

@ -0,0 +1,2 @@
FROM busybox
RUN /bin/echo 'hello from image_1'

View File

@ -0,0 +1,4 @@
FROM busybox
ADD foo.txt /tmp/foo.txt
RUN /bin/echo 'hello from image_2'
VOLUME /home

View File

@ -0,0 +1,32 @@
# Create a docker image that takes a long time to build
# CentOS is used as the base image. Any base image should work for the for-loop
# test, but CentOS is needed for the yum test.
# Note that pulling the base image will not trigger a timeout, regardless of how
# long it takes.
FROM centos
# Simply wait for 30 minutes, outputting a status update every 10 seconds
# This does not appear to trigger the timeout problem
# RUN [ "bash", "-c", "for minute in {1..30} ; do for second in {0..59..10} ; do echo -n \" $minute:$second \" ; sleep 10 ; done ; done" ]
# This triggers the timeout.
# Sleep for 5 minutes, 3 times.
# RUN [ "bash", "-c", "for minute in {0..10..5} ; do echo -n \" $minute \" ; sleep 300 ; done" ]
# Let's try this next.
# Sleep for 1 minute, 15 times
RUN [ "bash", "-c", "for minute in {0..15} ; do echo -n \" $minute \" ; sleep 60 ; done" ]
# This should trigger the timeout unless you have a very fast Internet connection.
# RUN \
# curl -SL https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -o epel.rpm \
# && yum install -y epel.rpm \
# && rm epel.rpm \
# && yum install -y \
# zarafa \
# supervisor \
# && yum clean all \
# && rm -rf /usr/share/man /etc/httpd/conf.d/ssl.conf

View File

@ -0,0 +1,2 @@
FROM alpine:3.1
RUN /bin/echo 'hello from image_3'

View File

@ -0,0 +1,9 @@
name 'docker_test'
maintainer 'Sean OMeara'
maintainer_email 'sean@sean.io'
license 'Apache-2.0'
description 'installs a buncha junk'
version '0.6.0'
depends 'docker'
depends 'etcd'

View File

@ -0,0 +1,21 @@
################
# Docker service
################
docker_service 'default' do
host 'unix:///var/run/docker.sock'
install_method 'auto'
service_manager 'auto'
action [:create, :start]
end
docker_image 'alpine' do
action :pull
end
docker_container 'an_echo_server' do
repo 'alpine'
command 'nc -ll -p 7 -e /bin/cat'
port '7:7'
action :run
end

File diff suppressed because it is too large

View File

@ -0,0 +1,145 @@
################
# Setting up TLS
################
caname = 'docker_service_default'
caroot = "/ca/#{caname}"
directory caroot.to_s do
recursive true
action :create
end
# Self signed CA
bash "#{caname} - generating CA private and public key" do
cmd = 'openssl req'
cmd += ' -x509'
cmd += ' -nodes'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -subj '/CN=kitchen2docker/'"
cmd += ' -newkey rsa:4096'
cmd += " -keyout #{caroot}/ca-key.pem"
cmd += " -out #{caroot}/ca.pem"
cmd += ' 2>&1>/dev/null'
code cmd
not_if "/usr/bin/test -f #{caroot}/ca-key.pem"
not_if "/usr/bin/test -f #{caroot}/ca.pem"
action :run
end
# server certs
bash "#{caname} - creating private key for docker server" do
code "openssl genrsa -out #{caroot}/server-key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/server-key.pem"
action :run
end
bash "#{caname} - generating certificate request for server" do
cmd = 'openssl req'
cmd += ' -new'
cmd += ' -sha256'
cmd += " -subj '/CN=#{node['hostname']}/'"
cmd += " -key #{caroot}/server-key.pem"
cmd += " -out #{caroot}/server.csr"
code cmd
only_if "/usr/bin/test -f #{caroot}/server-key.pem"
not_if "/usr/bin/test -f #{caroot}/server.csr"
action :run
end
file "#{caroot}/server-extfile.cnf" do
content "subjectAltName = IP:#{node['ipaddress']},IP:127.0.0.1\n"
action :create
end
bash "#{caname} - signing request for server" do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/server.csr"
cmd += " -out #{caroot}/server.pem"
cmd += " -extfile #{caroot}/server-extfile.cnf"
not_if "/usr/bin/test -f #{caroot}/server.pem"
code cmd
action :run
end
# client certs
bash "#{caname} - creating private key for docker client" do
code "openssl genrsa -out #{caroot}/key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/key.pem"
action :run
end
bash "#{caname} - generating certificate request for client" do
cmd = 'openssl req'
cmd += ' -new'
cmd += " -subj '/CN=client/'"
cmd += " -key #{caroot}/key.pem"
cmd += " -out #{caroot}/client.csr"
code cmd
only_if "/usr/bin/test -f #{caroot}/key.pem"
not_if "/usr/bin/test -f #{caroot}/client.csr"
action :run
end
file "#{caroot}/client-extfile.cnf" do
content "extendedKeyUsage = clientAuth\n"
action :create
end
bash "#{caname} - signing request for client" do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/client.csr"
cmd += " -out #{caroot}/cert.pem"
cmd += " -extfile #{caroot}/client-extfile.cnf"
code cmd
not_if "/usr/bin/test -f #{caroot}/cert.pem"
action :run
end
################
# Etcd service
################
etcd_service 'etcd0' do
advertise_client_urls "http://#{node['ipaddress']}:2379,http://0.0.0.0:4001"
listen_client_urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
initial_advertise_peer_urls "http://#{node['ipaddress']}:2380"
listen_peer_urls 'http://0.0.0.0:2380'
initial_cluster_token 'etcd0'
initial_cluster "etcd0=http://#{node['ipaddress']}:2380"
initial_cluster_state 'new'
action [:create, :start]
end
################
# Docker service
################
docker_service 'default' do
host ['unix:///var/run/docker.sock', 'tcp://127.0.0.1:2376']
version node['docker']['version']
labels ['environment:test', 'foo:bar']
tls_verify true
tls_ca_cert "#{caroot}/ca.pem"
tls_server_cert "#{caroot}/server.pem"
tls_server_key "#{caroot}/server-key.pem"
tls_client_cert "#{caroot}/cert.pem"
tls_client_key "#{caroot}/key.pem"
cluster_store "etcd://#{node['ipaddress']}:4001"
cluster_advertise "#{node['ipaddress']}:4001"
install_method 'package'
action [:create, :start]
end

View File

@ -0,0 +1,25 @@
docker_image 'busybox' do
action :pull_if_missing
end
docker_container 'busybox_exec' do
repo 'busybox'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
end
docker_exec 'touch_it' do
container 'busybox_exec'
command ['touch', '/tmp/onefile']
timeout 120
not_if { ::File.exist?('/marker_busybox_exec_onefile') }
end
file '/marker_busybox_exec_onefile'
docker_exec 'poke_it' do
container 'busybox_exec'
cmd ['touch', '/tmp/twofile']
not_if { ::File.exist?('/marker_busybox_exec_twofile') }
end
file '/marker_busybox_exec_twofile'

View File

@ -0,0 +1,317 @@
# Two variables, one recipe.
caname = 'docker_service_default'
caroot = "/ca/#{caname}"
#########################
# :pull_if_missing, :pull
#########################
# default action, default properties
docker_image 'hello-world'
# non-default name attribute, containing a single quote
docker_image "Tom's container" do
repo 'tduffield/testcontainerd'
end
# :pull action specified
docker_image 'busybox' do
action :pull
end
# :pull_if_missing
docker_image 'debian' do
action :pull_if_missing
end
# specify a tag and read/write timeouts
docker_image 'alpine' do
tag '3.1'
read_timeout 60
write_timeout 60
end
# host override
docker_image 'alpine-localhost' do
repo 'alpine'
tag '2.7'
host 'tcp://127.0.0.1:2376'
tls_verify true
tls_ca_cert "#{caroot}/ca.pem"
tls_client_cert "#{caroot}/cert.pem"
tls_client_key "#{caroot}/key.pem"
end
#########
# :remove
#########
# install something so it can be used to test the :remove action
execute 'pull vbatts/slackware' do
command 'docker pull vbatts/slackware ; touch /marker_image_slackware'
creates '/marker_image_slackware'
action :run
end
docker_image 'vbatts/slackware' do
action :remove
end
########
# :save
########
docker_image 'save hello-world' do
repo 'hello-world'
destination '/hello-world.tar'
not_if { ::File.exist?('/hello-world.tar') }
action :save
end
########
# :load
########
docker_image 'cirros' do
action :pull
not_if { ::File.exist?('/marker_load_cirros-1') }
end
docker_image 'save cirros' do
repo 'cirros'
destination '/cirros.tar'
not_if { ::File.exist?('/cirros.tar') }
action :save
end
docker_image 'remove cirros' do
repo 'cirros'
not_if { ::File.exist?('/marker_load_cirros-1') }
action :remove
end
docker_image 'load cirros' do
source '/cirros.tar'
not_if { ::File.exist?('/marker_load_cirros-1') }
action :load
end
file '/marker_load_cirros-1' do
action :create
end
###########################
# :build
###########################
# Build from a Dockerfile
directory '/usr/local/src/container1' do
action :create
end
cookbook_file '/usr/local/src/container1/Dockerfile' do
source 'Dockerfile_1'
action :create
end
docker_image 'someara/image-1' do
tag 'v0.1.0'
source '/usr/local/src/container1/Dockerfile'
force true
not_if { ::File.exist?('/marker_image_image-1') }
action :build
end
file '/marker_image_image-1' do
action :create
end
# Build from a directory
directory '/usr/local/src/container2' do
action :create
end
file '/usr/local/src/container2/foo.txt' do
content 'Dockerfile_2 contains ADD for this file'
action :create
end
cookbook_file '/usr/local/src/container2/Dockerfile' do
source 'Dockerfile_2'
action :create
end
docker_image 'someara/image.2' do
tag 'v0.1.0'
source '/usr/local/src/container2'
action :build_if_missing
end
# Build from a tarball
cookbook_file '/usr/local/src/image_3.tar' do
source 'image_3.tar'
action :create
end
docker_image 'image_3' do
tag 'v0.1.0'
source '/usr/local/src/image_3.tar'
action :build_if_missing
end
#########
# :import
#########
docker_image 'hello-again' do
tag 'v0.1.0'
source '/hello-world.tar'
action :import
end
################
# :tag and :push
################
######################
# This commented out section was manually tested by replacing the
# authentication creds with real live Dockerhub creds.
#####################
# docker_registry 'https://index.docker.io/v1/' do
# username 'youthere'
# password 'p4sswh1rr3d'
# email 'youthere@computers.biz'
# end
# # name-w-dashes
# docker_tag 'public dockerhub someara/name-w-dashes:v1.0.1' do
# target_repo 'hello-again'
# target_tag 'v0.1.0'
# to_repo 'someara/name-w-dashes'
# to_tag 'latest'
# action :tag
# end
# docker_image 'push someara/name-w-dashes' do
# repo 'someara/name-w-dashes'
# not_if { ::File.exist?('/marker_image_public_name-w-dashes') }
# action :push
# end
# file '/marker_image_public_name-w-dashes' do
# action :create
# end
# # name.w.dots
# docker_tag 'public dockerhub someara/name.w.dots:latest' do
# target_repo 'busybox'
# target_tag 'latest'
# to_repo 'someara/name.w.dots'
# to_tag 'latest'
# action :tag
# end
# docker_image 'push someara/name.w.dots' do
# repo 'someara/name.w.dots'
# not_if { ::File.exist?('/marker_image_public_name.w.dots') }
# action :push
# end
# file '/marker_image_public_name.w.dots' do
# action :create
# end
# # private-repo-test
# docker_tag 'public dockerhub someara/private-repo-test:v1.0.1' do
# target_repo 'hello-world'
# target_tag 'latest'
# to_repo 'someara/private-repo-test'
# to_tag 'latest'
# action :tag
# end
# docker_image 'push someara/private-repo-test' do
# repo 'someara/private-repo-test'
# not_if { ::File.exist?('/marker_image_public_private-repo-test') }
# action :push
# end
# file '/marker_image_public_private-repo-test' do
# action :create
# end
# docker_image 'someara/private-repo-test'
# public images
docker_image 'someara/name-w-dashes'
docker_image 'someara/name.w.dots'
##################
# Private registry
##################
include_recipe 'docker_test::registry'
# for pushing to private repo
docker_tag 'private repo tag for name-w-dashes:v1.0.1' do
target_repo 'hello-again'
target_tag 'v0.1.0'
to_repo 'localhost:5043/someara/name-w-dashes'
to_tag 'latest'
action :tag
end
# for pushing to private repo
docker_tag 'private repo tag for name.w.dots' do
target_repo 'busybox'
target_tag 'latest'
to_repo 'localhost:5043/someara/name.w.dots'
to_tag 'latest'
action :tag
end
docker_tag 'private repo tag for name.w.dots v0.1.0' do
target_repo 'busybox'
target_tag 'latest'
to_repo 'localhost:5043/someara/name.w.dots'
to_tag 'v0.1.0'
action :tag
end
docker_registry 'localhost:5043' do
username 'testuser'
password 'testpassword'
email 'alice@computers.biz'
end
docker_image 'localhost:5043/someara/name-w-dashes' do
not_if { ::File.exist?('/marker_image_private_name-w-dashes') }
action :push
end
file '/marker_image_private_name-w-dashes' do
action :create
end
docker_image 'localhost:5043/someara/name.w.dots' do
not_if { ::File.exist?('/marker_image_private_name.w.dots') }
action :push
end
docker_image 'localhost:5043/someara/name.w.dots' do
not_if { ::File.exist?('/marker_image_private_name.w.dots') }
tag 'v0.1.0'
action :push
end
file '/marker_image_private_name.w.dots' do
action :create
end
# Pull from the public Dockerhub after being authenticated to a
# private one
docker_image 'fedora' do
action :pull
end

View File

@ -0,0 +1,15 @@
#########################
# :prune
#########################
docker_image_prune 'hello-world' do
dangling true
end
docker_image_prune 'prune-old-images' do
dangling true
prune_until '1h30m'
with_label 'com.example.vendor=ACME'
without_label 'no_prune'
action :prune
end

View File

@ -0,0 +1,4 @@
docker_installation_package 'default' do
version '18.06.0'
action :create
end

View File

@ -0,0 +1,4 @@
docker_installation_script 'default' do
repo node['docker']['repo']
action :create
end

View File

@ -0,0 +1,4 @@
docker_installation_tarball 'default' do
version node['docker']['version']
action :create
end

View File

@ -0,0 +1,251 @@
# pull alpine image
docker_image 'alpine' do
tag '3.1'
action :pull_if_missing
end
# unicode characters
docker_network 'seseme_straße' do
action :create
end
###########
# network_a
###########
# defaults
docker_network 'network_a' do
action :create
end
# docker run --net=
docker_container 'echo-base-network_a' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_a'
action :run
end
docker_container 'echo-station-network_a' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_a'
action :run
end
############
# network_b
############
execute 'create network_b' do
command 'docker network create network_b'
not_if { ::File.exist?('/marker_delete_network_b') }
end
file '/marker_delete_network_b' do
action :create
end
# Delete a network
docker_network 'network_b' do
action :delete
end
###########
# network_c
###########
# specify subnet and gateway
docker_network 'network_c' do
subnet '192.168.88.0/24'
gateway '192.168.88.1'
action :create
end
# docker run --net=
docker_container 'echo-base-network_c' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_c'
action :run
end
docker_container 'echo-station-network_c' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_c'
action :run
end
###########
# network_d
###########
# create a network with aux_address
docker_network 'network_d' do
subnet '192.168.89.0/24'
gateway '192.168.89.1'
aux_address ['a=192.168.89.2', 'b=192.168.89.3']
end
docker_container 'echo-base-network_d' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_d'
action :run
end
docker_container 'echo-station-network_d' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_d'
action :run
end
###########
# network_e
###########
# specify overlay driver
docker_network 'network_e' do
driver 'overlay'
action :create
end
###########
# network_f
###########
# create a network with an ip-range
docker_network 'network_f' do
driver 'bridge'
subnet '172.28.0.0/16'
gateway '172.28.5.254'
ip_range '172.28.5.0/24'
end
docker_container 'echo-base-network_f' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_f'
ip_address '172.28.5.5'
action :run
end
docker_container 'echo-station-network_f' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_f'
action :run
end
###########
# network_g
###########
# create an overlay network with multiple subnets
docker_network 'network_g' do
driver 'overlay'
subnet ['192.168.0.0/16', '192.170.0.0/16']
gateway ['192.168.0.100', '192.170.0.100']
ip_range '192.168.1.0/24'
aux_address ['a=192.168.1.5', 'b=192.168.1.6', 'a=192.170.1.5', 'b=192.170.1.6']
end
docker_container 'echo-base-network_g' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '1337'
network_mode 'network_g'
action :run
end
docker_container 'echo-station-network_g' do
repo 'alpine'
tag '3.1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
port '31337'
network_mode 'network_g'
action :run
end
###########
# network_h
###########
# connect same container to multiple networks
docker_network 'network_h1' do
action :create
end
docker_network 'network_h2' do
action :create
end
docker_container 'container1-network_h' do
repo 'alpine'
tag '3.1'
network_mode 'network_h1'
command 'sh -c "trap exit 0 SIGTERM; while :; do sleep 1; done"'
not_if { ::File.exist?('/marker_network_h') }
action :run
end
file '/marker_network_h' do
action :create
end
docker_network 'network_h2 connector' do
container 'container1-network_h'
network_name 'network_h2'
action :connect
end
# disconnect from a network
docker_network 'network_h1 disconnector' do
container 'container1-network_h'
network_name 'network_h1'
action :disconnect
end
##############
# network_ipv6
##############
# IPv6 enabled network
docker_network 'network_ipv6' do
enable_ipv6 true
subnet 'fd00:dead:beef::/48'
action :create
end
##############
# network_ipv4
##############
docker_network 'network_ipv4' do
action :create
end
##################
# network_internal
##################
docker_network 'network_internal' do
internal true
action :create
end

View File

@ -0,0 +1,94 @@
######################
# :install and :update
######################
sshfs_caps = [
{
'Name' => 'network',
'Value' => ['host'],
},
{
'Name' => 'mount',
'Value' => ['/var/lib/docker/plugins/'],
},
{
'Name' => 'mount',
'Value' => [''],
},
{
'Name' => 'device',
'Value' => ['/dev/fuse'],
},
{
'Name' => 'capabilities',
'Value' => ['CAP_SYS_ADMIN'],
},
]
docker_plugin 'vieux/sshfs' do
grant_privileges sshfs_caps
end
docker_plugin 'configure vieux/sshfs' do
action :update
local_alias 'vieux/sshfs'
options(
'DEBUG' => '1'
)
end
docker_plugin 'remove vieux/sshfs' do
local_alias 'vieux/sshfs'
action :remove
end
#######################
# :install with options
#######################
docker_plugin 'rbd' do
remote 'wetopi/rbd'
remote_tag '1.0.1'
grant_privileges true
options(
'LOG_LEVEL' => '4'
)
end
docker_plugin 'remove rbd' do
local_alias 'rbd'
action :remove
end
#######################################
# :install twice (should be idempotent)
#######################################
docker_plugin 'sshfs 2.1' do
local_alias 'sshfs'
remote 'vieux/sshfs'
remote_tag 'latest'
grant_privileges true
end
docker_plugin 'sshfs 2.2' do
local_alias 'sshfs'
remote 'vieux/sshfs'
remote_tag 'latest'
grant_privileges true
end
docker_plugin 'enable sshfs' do
local_alias 'sshfs'
action :enable
end
docker_plugin 'disable sshfs' do
local_alias 'sshfs'
action :disable
end
docker_plugin 'remove sshfs again' do
local_alias 'sshfs'
action :remove
end

View File

@ -0,0 +1,192 @@
# We're going to need some SSL certificates for testing.
caroot = '/tmp/registry/tls'
directory caroot.to_s do
recursive true
action :create
end
# Self signed CA
bash 'generating CA private and public key' do
cmd = 'openssl req'
cmd += ' -x509'
cmd += ' -nodes'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -subj '/CN=kitchen2docker/'"
cmd += ' -newkey rsa:4096'
cmd += " -keyout #{caroot}/ca-key.pem"
cmd += " -out #{caroot}/ca.pem"
cmd += ' 2>&1>/dev/null'
code cmd
not_if "/usr/bin/test -f #{caroot}/ca-key.pem"
not_if "/usr/bin/test -f #{caroot}/ca.pem"
action :run
end
# server certs
bash 'creating private key for docker server' do
code "openssl genrsa -out #{caroot}/server-key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/server-key.pem"
action :run
end
bash 'generating certificate request for server' do
cmd = 'openssl req'
cmd += ' -new'
cmd += ' -sha256'
cmd += " -subj '/CN=#{node['hostname']}/'"
cmd += " -key #{caroot}/server-key.pem"
cmd += " -out #{caroot}/server.csr"
code cmd
not_if "/usr/bin/test -f #{caroot}/server.csr"
action :run
end
file "#{caroot}/server-extfile.cnf" do
content "subjectAltName = IP:#{node['ipaddress']},IP:127.0.0.1\n"
action :create
end
bash 'signing request for server' do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/server.csr"
cmd += " -out #{caroot}/server.pem"
cmd += " -extfile #{caroot}/server-extfile.cnf"
not_if "/usr/bin/test -f #{caroot}/server.pem"
code cmd
action :run
end
# client certs
bash 'creating private key for docker client' do
code "openssl genrsa -out #{caroot}/key.pem 4096"
not_if "/usr/bin/test -f #{caroot}/key.pem"
action :run
end
bash 'generating certificate request for client' do
cmd = 'openssl req'
cmd += ' -new'
cmd += " -subj '/CN=client/'"
cmd += " -key #{caroot}/key.pem"
cmd += " -out #{caroot}/client.csr"
code cmd
not_if "/usr/bin/test -f #{caroot}/client.csr"
action :run
end
file "#{caroot}/client-extfile.cnf" do
content "extendedKeyUsage = clientAuth\n"
action :create
end
bash 'signing request for client' do
cmd = 'openssl x509'
cmd += ' -req'
cmd += ' -days 365'
cmd += ' -sha256'
cmd += " -CA #{caroot}/ca.pem"
cmd += " -CAkey #{caroot}/ca-key.pem"
cmd += ' -CAcreateserial'
cmd += " -in #{caroot}/client.csr"
cmd += " -out #{caroot}/cert.pem"
cmd += " -extfile #{caroot}/client-extfile.cnf"
code cmd
not_if "/usr/bin/test -f #{caroot}/cert.pem"
action :run
end
# Set up a test registry to test :push
# https://github.com/docker/distribution/blob/master/docs/authentication.md
#
docker_image 'nginx' do
tag '1.9'
end
docker_image 'registry' do
tag '2.6.1'
end
directory '/tmp/registry/auth' do
recursive true
owner 'root'
mode '0755'
action :create
end
template '/tmp/registry/auth/registry.conf' do
source 'registry/auth/registry.conf.erb'
owner 'root'
mode '0755'
action :create
end
# install certificates
execute 'copy server cert for registry' do
command "cp #{caroot}/server.pem /tmp/registry/auth/server.crt"
creates '/tmp/registry/auth/server.crt'
action :run
end
execute 'copy server key for registry' do
command "cp #{caroot}/server-key.pem /tmp/registry/auth/server.key"
creates '/tmp/registry/auth/server.key'
action :run
end
# testuser / testpassword
template '/tmp/registry/auth/registry.password' do
source 'registry/auth/registry.password.erb'
owner 'root'
mode '0755'
action :create
end
bash 'start docker registry' do
code <<-EOF
docker run \
-d \
-p 5000:5000 \
--name registry_service \
--restart=always \
registry:2
EOF
not_if "[ ! -z `docker ps -qaf 'name=registry_service$'` ]"
end
bash 'start docker registry proxy' do
code <<-EOF
docker run \
-d \
-p 5043:443 \
--name registry_proxy \
--restart=always \
-v /tmp/registry/auth/:/etc/nginx/conf.d \
nginx:1.9
EOF
not_if "[ ! -z `docker ps -qaf 'name=registry_proxy$'` ]"
end
bash 'wait for docker registry and proxy' do
code <<-EOF
i=0
tries=20
while true; do
((i++))
netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"
[ $? -eq 0 ] && break
[ $i -eq $tries ] && break
sleep 1
done
EOF
not_if 'netstat -plnt | grep ":5000" && netstat -plnt | grep ":5043"'
end

Some files were not shown because too many files have changed in this diff