Q. Is it possible to feed packer with an ansible encrypted file?
I’m attempting to use packer from an ansible playbook in order to use an ansible-vault encrypted file in the build process.
Specifically, I have an "autounattend.xml" answer file for automated Windows installs. I'd like to encrypt this file with ansible-vault encrypt, and then use it in a packer template that looks like this:
source "vsphere-iso" "windows" {
  # truncated for brevity
  floppy_files = [
    "[insert DEcrypted autounattend.xml file here]",
    "./scripts/winrm.bat",
    "./scripts/Install-VMWareTools.ps1",
    "./drivers/"
  ]
}
Is this possible? Or is there a different way I can use run-time-decrypted files in my packer build?
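One common approach is a small wrapper: decrypt to a temporary location just before the build, pass the path in as a variable, and remove the plaintext afterwards. A minimal sketch, assuming the vaulted file lives at ./secrets/autounattend.xml.vault and the template declares an unattend_file variable (both names hypothetical):

# Hypothetical wrapper; vault path, variable name, and template name are assumptions.
ansible-vault decrypt --output ./build/autounattend.xml ./secrets/autounattend.xml.vault
packer build -var "unattend_file=./build/autounattend.xml" windows.pkr.hcl
shred -u ./build/autounattend.xml   # don't leave the decrypted copy behind

The template's floppy_files entry would then reference var.unattend_file instead of a literal path.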
Q. How can I chain Packer builds within a single template?
I would like to define a single Packer template that consists of three builds, each building upon the previous:
source "amazon-ebs" "base" {
  source_ami_filter {
    filters {
      …
    }
  }
}

build {
  name = "build_1"
  source "amazon-ebs.base" {
    ami_name = "build-1-ami"
  }
}

build {
  name = "build_2"
  source "amazon-ebs.base" {
    ami_name          = "build-2-ami"
    source_ami        = build_1.output.ami_id
    source_ami_filter { filters = {} }
  }
}

build {
  name = "build_3"
  source "amazon-ebs.base" {
    ami_name          = "build-3-ami"
    source_ami        = build_2.output.ami_id
    source_ami_filter { filters = {} }
  }
}
I know that it is possible to chain builders in some way (https://www.packer.io/guides/packer-on-cicd/pipelineing-builds#chaining-together-several-of-the-same-builders-to-make-save-points). However, that example is using Docker and doing it a little more indirectly.
Is it possible to refer directly to the AMI ID produced by a previous build stage? I know build_x.output.ami_id is incorrect, but is there a syntax that allows it?
If so, I think I also need to override or unset the source_ami_filter from the source, because the documentation says that when source_ami and source_ami_filter are used together, source_ami has to meet the filter's other criteria, which won't necessarily be the case.
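There is no build_x.output.ami_id syntax, but a commonly used workaround (sketched below; filter values and instance settings are assumptions) is to run the stages as separate packer build invocations and have each later stage look up the previous stage's AMI by name with the amazon-ami data source, which also sidesteps the source_ami/source_ami_filter conflict:

# Stage 2's template: find the AMI that stage 1 just produced.
data "amazon-ami" "build_1" {
  filters = {
    name = "build-1-ami"
  }
  owners      = ["self"]
  most_recent = true
}

source "amazon-ebs" "build_2" {
  ami_name      = "build-2-ami"
  source_ami    = data.amazon-ami.build_1.id
  instance_type = "t3.micro"   # assumed
  ssh_username  = "ubuntu"     # assumed
}

build {
  sources = ["source.amazon-ebs.build_2"]
}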
Q. Testing Raspberry Pi image in a docker container with Packer and qemu
I would like to make a smoke test of a Raspberry Pi image in a docker container.
I am building the image using packer.io and the build-arm-image plugin in a GitLab pipeline inside a docker container. This packer plugin uses qemu to run an existing arm image, execute commands inside it, and save the resulting image.
I have tried to reload the generated image in the same way (Packer and qemu), but when trying to check whether the systemd services I had enabled were available, I got an error that systemd could not run in docker containers (something about dbus not being available).
Is there another way to run this image in a docker container (to be run in my gitlab pipeline) and test if the services are running and if the website/api is available?
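One way to smoke-test without booting systemd at all is to mount the image and inspect it statically; systemctl's is-enabled/enable/disable subcommands work offline in a chroot because they only read symlinks on disk. A sketch, assuming a privileged container with qemu-user-static registered via binfmt (image and service names are placeholders):

# Attach the image and mount its root filesystem (partition 2 on Raspberry Pi OS).
LOOP=$(losetup --find --show --partscan raspios.img)
mount "${LOOP}p2" /mnt

# Needs no running systemd or dbus; exits non-zero if the unit isn't enabled.
chroot /mnt systemctl is-enabled myservice.service

umount /mnt
losetup -d "$LOOP"

Testing that the website/API actually responds still requires booting the image, which generally means qemu-system-arm rather than a plain container.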
Q. Build Windows image for openstack with packer
I am trying to build a Windows image for OpenStack with packer, but I don't know how to add the autounattend.xml file or floppy files.
I can build the image successfully with QEMU, but I cannot find the equivalent for OpenStack.
For QEMU I have the floppy_files for the virtio-win drivers and the autounattend file that automates the installation. How can I pass these files to OpenStack for automation?
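The OpenStack builder launches from an existing Glance image, so it has nothing like floppy_files; a common route (a sketch, file and image names assumed) is to keep doing the unattended install with the QEMU builder and then upload the result to Glance:

# Upload the QEMU-built qcow2 so OpenStack instances (and the Packer
# openstack builder) can use it as a source image.
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file output-qemu/windows-server.qcow2 \
  windows-server-base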
Q. How to run a CMD in a docker container that was created using Packer?
So I am creating a docker image with packer. The template defines the provisioners, builders, etc. The builder section looks like this:
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:latest",
      "export_path": "image.tar",
      "changes": [
        "USER test",
        "WORKDIR /tmp",
        "VOLUME /data",
        "EXPOSE 21 22",
        "CMD sudo myprogram"
      ]
    }
  ]
}
When running packer against the template, the output is an image.tar file. I can then import it with docker import image.tar and start it like this: docker run -p 21:21 -it --rm 52c0b0362362 bash.
I want sudo myprogram to be executed automatically whenever the image is started. However, it does not seem to work, even though the template validated successfully. Instead of specifying CMD sudo myprogram I also tried setting it as an entrypoint, like so: ENTRYPOINT sudo myprogram. Neither worked. How can I make sure this command is executed automatically whenever my image is started? It must be run as root/with sudo; that's important.
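Two things are likely biting here: docker import creates a bare filesystem image and drops the metadata Packer recorded, and appending bash to docker run overrides any CMD anyway. One fix (a sketch against the same tar) is to re-apply the changes at import time and run without an explicit command:

# Re-apply the image metadata that the tar export does not carry.
docker import \
  --change 'USER test' \
  --change 'WORKDIR /tmp' \
  --change 'EXPOSE 21 22' \
  --change 'CMD ["sudo", "myprogram"]' \
  image.tar myimage:latest

# No trailing "bash" here, so the CMD actually runs.
docker run -p 21:21 --rm myimage:latest

Note that with USER test, sudo must be installed and configured inside the image; if the program simply needs root, dropping USER test so the CMD runs as root is simpler.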
Q. How to add Azure VM’s admin user to a specific unix group?
I’m creating an Azure VM image using Packer. The VM is just a Debian 10 installation with a couple of packages installed.
What I really need to do is add the VM admin user to a specific unix group (docker).
During the packer image provisioning I make changes to /etc/adduser.conf to enable extra groups:
"provisioners": [{
  "type": "shell",
  "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
  "inline_shebang": "/bin/sh -x",
  "inline": [
    "apt-get update",
    "apt-get upgrade -y",
    "apt-get -y install docker.io",
    "echo 'ADD_EXTRA_GROUPS=1\nEXTRA_GROUPS=\"docker\"' >> /etc/adduser.conf",
    "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
  ]
}]
But this only adds explicitly created users to the docker group, not the admin user created with the following command:
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image myPackerImage \
  --admin-username azureuser \
  --generate-ssh-keys
Is there any way to add the admin user to the docker group too?
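Since the admin account only comes into existence at deploy time (after the image is generalized), one option is a post-create step rather than baking it in. A sketch using the VM run-command API, with the names copied from your az vm create:

# Add the freshly created admin user to the docker group after the VM exists.
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myVM \
  --command-id RunShellScript \
  --scripts "usermod -aG docker azureuser"

A cloud-init user-data file passed at creation time is the other common route.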
Q. How to download a VM image from GCP?
I do not see a download button. I would like to download a VM image that was created on GCP using Packer, and run it locally in VirtualBox.
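There is no download button in the console, but gcloud can export the image to a Cloud Storage bucket, after which it can be fetched and converted for VirtualBox. A sketch (image and bucket names are placeholders):

# Export the image to GCS in VMDK format, pull it down, and convert it.
gcloud compute images export \
  --image my-packer-image \
  --destination-uri gs://my-bucket/my-packer-image.vmdk \
  --export-format vmdk
gsutil cp gs://my-bucket/my-packer-image.vmdk .
VBoxManage clonemedium disk my-packer-image.vmdk my-packer-image.vdi --format VDI

VirtualBox can also attach the .vmdk directly, so the final conversion is optional.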
Q. Is it possible to locate the temp keypair generated by Packer?
I'm creating a new image and everything is working. I would like to debug via ssh during the instance creation. A temporary keypair is created and attached to the temporary instance. My question is: can I get this keypair somewhere in order to use it for debugging?
==> amazon-ebs: Prevalidating AMI Name…
amazon-ebs: Found Image ID: ami-0866798422f5d546b
==> amazon-ebs: Creating temporary keypair: packer_5cc6c77d-494a-f185-b5b3-f9b59e62fd4e
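Running the build with -debug makes Packer pause between steps and write the temporary private key into the working directory, which is enough to ssh in while the instance is up. A sketch (the key file name follows the pattern Packer prints; the login user depends on your source AMI):

packer build -debug template.json
# Packer drops something like ec2_amazon-ebs.pem in the working directory;
# use it with the public IP shown in the build output:
ssh -i ec2_amazon-ebs.pem ubuntu@<instance-public-ip>

Alternatively, the amazon-ebs builder accepts ssh_keypair_name and ssh_private_key_file if you would rather use a keypair you already control.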
Q. DevOps Newbie – How to automate windows infrastructure deployment?
DevOps newbie here, so bear with me. I have a project where infrastructure deployment and application deployment are done using PowerShell, and it is tied to VMware vCenter.
For every client deployment, a deployment VM is created manually on their vCenter >> everything (application binaries and automation scripts) is copied manually to this VM >> DevOps runs a PowerShell script on this VM to create the other VMs, configure them, and deploy the application binaries. How can this whole flow be automated?
Q. Packer and compressed ISO images
I am not using packer yet, just looking through the documentation. Some of the VMs it supports can be built from ISO images. The examples cover use cases where the ISO is available online through the iso_url key in the JSON description file. But can Packer handle cases where the ISO file is compressed (e.g. https://example.com/images/image.iso.xz)?
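As far as I know Packer does not decompress archives fetched via iso_url, so a small pre-step is needed; a minimal sketch:

# Fetch and unpack once, then point iso_url at the local file (file:// URLs work).
curl -LO https://example.com/images/image.iso.xz
xz --decompress image.iso.xz        # leaves image.iso
sha256sum image.iso                 # value goes into iso_checksum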
Q. Is there an idiomatic way to create reusable Packer templates?
I'm creating about 10-12 Packer templates which almost all work the same way: the same builder (Amazon EBS) with some small variations in AMI names, and almost the same provisioner (Ansible Remote), sometimes with additional variables and sometimes multiple playbooks.
But much of each Packer template remains the same.
I know I can use variables to get some of the changeable values into one place in the file. But I’m keen to cut down on all of the copy-pasta in the templates.
I could do this with a wrapper shell script utilising jq or something. But each bit of complexity I add gives others something else to learn. Try as I might, I can't seem to find a 'blessed' idiomatic way of doing this with Packer.
Is there an idiomatic way of doing this – or at least a generally accepted way?
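With HCL2 templates, the commonly suggested pattern is one shared template plus a small variables file per image, selected with -var-file. A sketch (variable names and builder settings are illustrative):

variable "ami_name" {
  type = string
}

variable "playbook_file" {
  type = string
}

variable "extra_arguments" {
  type    = list(string)
  default = []
}

source "amazon-ebs" "common" {
  ami_name      = var.ami_name
  instance_type = "t3.micro"   # assumed
  ssh_username  = "ubuntu"     # assumed
}

build {
  sources = ["source.amazon-ebs.common"]

  provisioner "ansible" {
    playbook_file   = var.playbook_file
    extra_arguments = var.extra_arguments
  }
}

Each of the 10-12 images then reduces to a .pkrvars.hcl file, e.g. packer build -var-file=web.pkrvars.hcl .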
Q. In the HashiCorp stack, where’s the appropriate place to add users?
I’m in the process of building some custom Linux images using HashiCorp’s Packer, which will later be deployed to Azure using HashiCorp’s Terraform. The VMs created from these images will need to have a set of users created; in this particular case I’m concerning myself with user accounts for employees that may need to ssh into these VMs.
For this kind of configuration, does it make more sense to add these user accounts to the base image in the Packer script, or to add them when the VM is created via Terraform? It seems to me that handling it in Packer makes more sense, but are there reasons not to do it there?
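If you lean toward the Terraform side, cloud-init keeps the image generic and injects accounts at deploy time. A sketch for Azure (user, key, and the referenced network/image resources are placeholders, not a complete configuration):

resource "azurerm_linux_virtual_machine" "vm" {
  name                  = "example-vm"
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = "Standard_B2s"
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.nic.id]
  source_image_id       = var.packer_image_id   # the Packer-built image

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_ed25519.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  # Employee accounts injected at deploy time instead of baked into the image.
  custom_data = base64encode(<<-EOT
    #cloud-config
    users:
      - default
      - name: alice
        groups: sudo
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... alice@example.com
  EOT
  )
}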
Q. How should we automatically rebuild immutable infrastructure when new packages are available?
We're going to be using Terraform to automate our infrastructure deployment and Packer to create the machine images deployed by Terraform. Following immutable infrastructure design principles, we will implement patching by creating a new image with the patch applied and then redeploying our infrastructure.
With this setup, are there any additional tools we can use to automatically detect when a package or the OS itself in our base image needs updating and trigger the build pipeline?
Chef Automate seems close to what I’m looking for, however, it seems to scan running nodes for compliance rather than analyze the image manifest itself.
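Lacking a dedicated tool, a simple scheduled job can ask the package manager whether the base image's package set is stale and trigger the pipeline. A sketch that uses a container of the same distro release as a cheap proxy for the image's package set (base image and CI trigger URL are assumptions):

# Nightly check: simulate an upgrade against the base image's package set.
updates=$(docker run --rm ubuntu:20.04 sh -c \
  'apt-get update -qq >/dev/null && apt-get -s upgrade | grep -c ^Inst')

if [ "$updates" -gt 0 ]; then
  # Kick the Packer build pipeline (endpoint is hypothetical).
  curl -X POST "https://ci.example.com/api/v4/trigger/packer-rebuild"
fi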
Q. Deploying to VSphere with Packer and/or Terraform?
Scenario:
Creating multiple VMs and deploying them to VSphere. Current development uses Packer and Ansible to provision a Fusion VM, with the aim of using Terraform to deploy to VSphere.
Issue:
I've been having loads of issues with uploading Fusion VMs to VSphere and having to use workarounds such as this, and I've been wondering if there is a better way of doing this.
Question:
I'm questioning the use of Packer in this scenario: it deploys to Fusion, and there doesn't currently seem to be a Packer->ESXi (VSphere) builder unless you build from an ISO.
Would it be possible to remove Packer entirely from this scenario? That is, instead of building a Fusion image, provisioning the image, and uploading it to VSphere, use Terraform to deploy an ESXi image (template), and provision that image in-place using Ansible as a provisioner, such that everything is done in VSphere rather than Fusion and then VSphere.
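The Packer-free variant described is workable. A sketch of the Terraform side (all names assumed; the data sources for pool, datastore, network, and template are presumed to exist), cloning a vSphere template and then handing the VM to Ansible:

resource "vsphere_virtual_machine" "app" {
  name             = "app-01"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.net.id
  }

  disk {
    label = "disk0"
    size  = 40
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }

  # Provision in place once the clone is up; inventory is the VM's IP.
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.default_ip_address},' site.yml"
  }
}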
Q. What are reasons for using HashiCorp’s packer to build docker images instead of using docker build?
I have read https://www.packer.io/docs/builders/docker.html, but I do not see the advantage of using Packer over docker build or docker-compose up --build for building docker images.
Q. Storing Meta Data for Apps/tools
I have created a number of images via packer and am coming to the conclusion that binaries and images quickly rack up in quantity, so I need to find a way to store config data.
We are going to use ansible to hold config data for apps, and I was wondering how people manage versions of binaries, versions of images, ingress/egress data, etc. Does anyone know of a regularly used, industry approach to managing app data, versioning, config data, and so on? My default thought was a schema to use in ansible, or perhaps a tool that can manage this data.
With a combination of packer images, binaries, and other images, I am beginning to think that these files could increase in number very quickly. How do people manage this information?
Q. Baking Immutable Images
I am working on a project where we are trying to bake immutable images for containers as IaC. These images can have J2EE, .NET or Python apps on them. All OS patches should be applied to this image frequently, and app updates should also be applied to these ready-to-run images. Having said that, one option we are discussing includes Terraform for provisioning and Packer for building images. Many internet resources additionally deploy Ansible, Chef or Puppet into this mix. My question is: why is Packer not enough to cover baking images, and why would I need to consider Ansible/Chef/Puppet additionally?
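Packer alone can bake images; its shell provisioner covers patching fine. The usual argument for adding a configuration tool is reusing the same roles both at bake time and against long-lived machines. A sketch of how the two combine in one build (source and playbook names assumed):

build {
  sources = ["source.amazon-ebs.base"]   # source name assumed

  # Plain shell is often enough for OS patching at bake time...
  provisioner "shell" {
    inline = ["sudo yum -y update"]
  }

  # ...while an Ansible playbook can encode the app setup once and
  # also be reused outside Packer against running machines.
  provisioner "ansible" {
    playbook_file = "site.yml"
  }
}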
Q. Why is packer returning the error "typing a boot command (code, down) 82, false" while trying to build a CentOS 7 VM for vSphere?
Does anyone know how I could troubleshoot this packer boot error: Error running boot command: error typing a boot command (code, down) 82, false: ServerFaultCode: Permission to perform this operation was denied.
I am trying to build a CentOS 7 VM for vSphere. I am using vSphere Client version 6.7.0.46000 and packer 1.6.6 on macOS. Thank you in advance for any help.
Q. How do I use one provisioner for multiple Packer builds in an HCL-formatted template?
I’m using Packer to provision a VM for my project’s CI pipeline, which is hosted on a supported cloud provider. My provisioning scripts are a little complicated, so I need to iterate on them carefully to get them right. To save money I’m using a local image builder with the same provisioners as used in the cloud builder. This local image won’t be used in production, or even in development; it just serves to help me validate my provisioning scripts (and the resulting environment).
Since I'm testing my provisioner scripts, I want to share one provisioner block with all relevant build blocks. However, I can't for the life of me figure out how to do that; for now, I've been copying and pasting my provisioner block. The "only" field is the only field that varies, as I don't usually want to build the local and cloud images at the same time. How can I use one provisioner block within multiple build blocks in an HCL-formatted template, plus the occasional override?
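One pattern that avoids the copy-paste (a sketch; source names assumed) is a single build block listing both sources, with per-source divergence expressed through provisioner-level only, and the image selection moved to the CLI instead of the template:

build {
  sources = [
    "source.docker.local-test",
    "source.amazon-ebs.ci",
  ]

  # Shared across every source in this build block.
  provisioner "shell" {
    script = "provision.sh"
  }

  # The occasional override: restrict a step to one source.
  provisioner "shell" {
    only   = ["amazon-ebs.ci"]
    inline = ["echo cloud-only tweak"]
  }
}

Then something like packer build -only=docker.local-test . builds just the local image without touching the cloud.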
Q. How to remotely update AMI id in Jenkins EC2 plugin?
I have been looking for a way to update the AMI id in the Jenkins EC2 plugin configuration after a packer build is run. After some digging, I found a promising way to do it IF the packer run is done by Jenkins itself, via the Groovy postbuild plugin (mind you, this remains to be tested, but it looks good).
However, I would like to be able to run the packer build anywhere and have the resulting AMI id updated in Jenkins remotely, presumably via authenticated REST. Is that possible? Where would I start looking?
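Jenkins exposes its script console over authenticated HTTP at /scriptText, which can be driven from wherever the packer build runs. The curl usage below is standard Jenkins; the Groovy body that actually rewrites the EC2 plugin's template AMI is the plugin-specific part left to write (file and host names are placeholders):

# Post a Groovy script (kept in update-ami.groovy) to a remote Jenkins.
curl -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
  --data-urlencode "script@update-ami.groovy" \
  "https://jenkins.example.com/scriptText"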
Q. How to end-to-end provision a virtual machine including OS on ESXi standalone using Terraform?
For a small environment I’m tasked to create automated infrastructure deployment for a couple of virtual machines running on a single ESXi host (without vCenter). The VMs should run CentOS 8 and I will use ansible later on to configure the services.
In the past I regularly used packer to build my OS image, which I can do with a standalone host as well. The output will not be a template but a VM; still, it works fine as my base image. Without vCenter I cannot copy or clone the VM though, as this operation is not supported on a standalone host.
How can I create (fully automated) a VM with CentOS 8 installed, ideally by using terraform only? I was thinking of calling packer from terraform, but it's really not designed to work this way.
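If calling Packer from Terraform is acceptable despite the impedance mismatch, a null_resource at least keeps it dependency-ordered and re-runs only when the template changes. A sketch (file names assumed):

# Shell out to Packer once; re-run only when the template changes.
resource "null_resource" "centos8_base" {
  triggers = {
    template_hash = filemd5("${path.module}/centos8.pkr.hcl")
  }

  provisioner "local-exec" {
    command = "packer build ${path.module}/centos8.pkr.hcl"
  }
}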
Q. How to provide ssh_username when using packer to build a Windows AMI
When I run packer build -var aws_access_key=$AWS_ACCESS_KEY_ID -var aws_secret_key=$AWS_SECRET_ACCESS_KEY windows-2012.json
I get this error:
1 error(s) occurred:
- An ssh_username must be specified
Note: some builders used to default ssh_username to “root”.
However, there is no native support for ssh in Windows 2012, so how can I come up with an ssh_username?
When I was using terraform to build the server, I used the WinRM protocol. Can I instruct packer to use WinRM?
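Yes: the builders take a communicator setting, so the build can use WinRM instead of ssh entirely. A sketch of the relevant keys for a JSON template (the user-data script that enables WinRM inside the instance is your own and is assumed here):

{
  "builders": [
    {
      "type": "amazon-ebs",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "user_data_file": "scripts/enable-winrm.ps1"
    }
  ]
}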
Q. Removing install user with Packer
When a VM is first created, it gets an install user that is used to run the provisioning. I want to remove this user in the last step because it's not necessarily secure and it's unnecessary. However, Packer runs all of the provisioners as this user. I've tried using Ansible, but it still seems to be using this user in some capacity, and thus the Ansible playbook cannot actually remove it without failing (saying that there are programs still running as the given user). Rather than bumble around, I'm asking if anyone has any ideas on how to achieve this goal, which should be simple but has turned out not to be.
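Since any provisioner session pins the account it runs as, one workaround is to defer the deletion to the first boot of machines built from the image. A sketch using a self-disabling systemd unit ("installuser" is a placeholder; this relies on the shell provisioner joining its inline lines into one script, so the heredoc spans them):

provisioner "shell" {
  inline = [
    "sudo tee /etc/systemd/system/remove-install-user.service <<'EOF'",
    "[Unit]",
    "Description=Remove the build-time install user on first boot",
    "",
    "[Service]",
    "Type=oneshot",
    "ExecStart=/usr/sbin/userdel -r installuser",
    "ExecStartPost=/usr/bin/systemctl disable remove-install-user.service",
    "",
    "[Install]",
    "WantedBy=multi-user.target",
    "EOF",
    "sudo systemctl enable remove-install-user.service"
  ]
}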
Q. Data disappears after launching EC2 instance from AMI
I am launching an EC2 instance from an AMI that I am building using packer. In the packer build I specify a provisioning shell script which mounts the drives and gets some files from an S3 bucket into one of the mounted directories.
I can see that the build script works fine without any errors. But when I launch an EC2 instance from the newly created AMI, I am not able to see the S3 data that I extracted and put in the directory during provisioning with packer.
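If the mount point is backed by an instance-store volume or a secondary volume that is not part of the AMI, the data won't survive into new instances; only volumes captured in the AMI's block device mapping do. A sketch of forcing an extra volume into the image (device name and size are assumptions):

{
  "type": "amazon-ebs",
  "ami_block_device_mappings": [
    {
      "device_name": "/dev/sdf",
      "volume_size": 100,
      "volume_type": "gp2",
      "delete_on_termination": true
    }
  ]
}

The mount also needs an /etc/fstab entry baked into the image, or new instances will boot with the volume attached but unmounted.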
Q. Is there any way to provision bare-metal with Packer?
Can Packer be used to install and provision a bare metal server? Packer provides a web server with repository packages and preseed/kickstart files, and it can run various provisioning software (ansible, puppet, chef, etc.). Could it be used to install bare metal servers? If yes, what should the packer .json look like?
Q. What is the best workflow to build and test your aws opsworks chef cookbooks locally?
For months I've been struggling to find the best workflow for building and testing my AWS OpsWorks cookbooks locally prior to pushing them to OpsWorks.
After a lot of stalled attempts I found a blog post by Mike Greiling and have since settled on an environment that works well for me. I’d like to share the setup/configuration because there are a lot of moving pieces.
Q. Amazon Linux builds using Packer?
I've been trying to chase this down but evidently have not found the right documentation. Is it possible (and how) to build Amazon Linux machines with Packer? If so, would these just be mirrors of prebuilt AMIs? I know that Amazon Linux does not have an ISO per se, so building from an ISO would not work; I am trying to figure out the methodology around how this is accomplished.
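Right: with no ISO, Amazon Linux images are built by starting from Amazon's own published AMI and layering changes on top, typically via amazon-ebs with a source_ami_filter. A sketch (the name pattern matches Amazon Linux 2; adjust for other releases):

{
  "builders": [
    {
      "type": "amazon-ebs",
      "source_ami_filter": {
        "filters": {
          "name": "amzn2-ami-hvm-*-x86_64-gp2",
          "virtualization-type": "hvm",
          "root-device-type": "ebs"
        },
        "owners": ["amazon"],
        "most_recent": true
      },
      "instance_type": "t3.micro",
      "ssh_username": "ec2-user",
      "ami_name": "my-amazon-linux-{{timestamp}}"
    }
  ]
}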
Q. Packer with amazon-ebs VS amazon-instance
I am looking into using Packer to generate some of our VMs, and I have been working through the example here. When I try to run the packer build command I get the following error:
==> amazon-ebs: Error launching source instance: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request. (VPCResourceNotSpecified)
I resolved this issue (see edit below), but while digging I found this page stating that I can also use the amazon-instance builder, though it recommends using the amazon-ebs builder instead.
My question is, are there any drawbacks from using amazon-instance over amazon-ebs, or vice versa? It seems as if ebs will be much easier to spin up and maintain. Is that the case? Do I lose anything by using one or the other?
Edit: The issue I was running into was not related to permissions, but to having an instance_type of "t2.micro" instead of "m3.medium". I would still like to know the drawbacks of ebs vs instance, though.
Q. Any way to re-run the chef-solo provisioner on a packer built machine?
I’m building VirtualBox machines using Packer and the chef-solo provisioner. Is there a way to re-run chef from within the VM as recipes are updated without needing to re-run packer build?
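The chef-solo provisioner stages its configuration on the guest, so it can be re-invoked by hand from inside the VM. A sketch, assuming Packer's default staging directory was used and your updated recipes have been copied back into it:

# Re-run Chef against the config Packer left behind (paths are the
# provisioner defaults, assumed unchanged in your template).
sudo chef-solo \
  -c /tmp/packer-chef-solo/solo.rb \
  -j /tmp/packer-chef-solo/node.json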
Q. Active Directory, create user just for adding computers to the domain
I'm a Linux admin by trade, and my new job has me managing Windows servers.
I'm trying to create a Windows Server 2012 base image using packer. As part of the provisioning, the VM needs to be connected to Active Directory via a script. Obviously I don't want to put my personal password into the script.
Is it possible to create a user in Active Directory who has rights to bind a machine to AD, but can’t perform any other actions (for compliance)?
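Yes; the standard pattern is a dedicated service account with only "create computer objects" delegated on the relevant OU. A PowerShell sketch (OU path, domain, and account name are placeholders):

# Create a dedicated, otherwise-unprivileged join account.
New-ADUser -Name "svc-domainjoin" `
  -AccountPassword (Read-Host -AsSecureString "Password") `
  -Enabled $true

# Delegate only the right to create computer objects under this OU.
dsacls "OU=Servers,DC=example,DC=com" /I:T /G "EXAMPLE\svc-domainjoin:CC;computer"

This also sidesteps the default limit of 10 domain joins that ordinary authenticated users get.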
Q. How do I set locale when building an Ubuntu Docker image with Packer?
I’m using Packer to build a Docker image based on Ubuntu 14.04, i.e., in my Packer template I have:
"builders": [{
  "type": "docker",
  "image": "ubuntu",
  "commit": true
}],
and I build it using:
$ packer build my.json
What do I need to put in the template to get a specific locale (say en_GB) to be set when I subsequently run the following?
$ sudo docker run %IMAGE_ID% locale
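Two things appear to be needed: generate the locale inside the image with a provisioner, and persist the environment variables into the image metadata via the docker builder's changes option. A sketch, assuming en_GB.UTF-8 is the target:

{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:14.04",
    "commit": true,
    "changes": [
      "ENV LANG en_GB.UTF-8",
      "ENV LC_ALL en_GB.UTF-8"
    ]
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "apt-get update && apt-get install -y locales",
      "locale-gen en_GB.UTF-8",
      "update-locale LANG=en_GB.UTF-8"
    ]
  }]
}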
Q. Deploy packer images on bare metal server?
How can I match my dev and production bare metal box?
The hosting provider does not allow or have a process to install custom images and only provides a selection from all the latest popular distributions.
KVM is available, but they charge extra per use, and an engineer monitors the custom image installation from start to finish. So it's not the quick and repeatable process that is needed for fast deployment and iteration cycles using packer.
Q. How to use terraform.io to change the image of a stateful server without downtime or data loss?
Say I have application servers, database servers, and a few dns-round-robin load balancers, all powered by images created with Packer, with deployment managed by Terraform. How do I change the image of the database servers without nuking their data when the instances get destroyed and recreated?
The simplest thing I can think of would be to turn off writes, snapshot the database, and then restore the snapshot to the new servers. But it feels really wrong to rely on manual fiddling like that, and it also feels wrong to take the service down for a simple upgrade. There is a cleaner and better way, right?
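The usual answer is to keep the state off the replaceable instance entirely: a persistent volume that Terraform never destroys, attached to whichever instance currently runs the image. A sketch for AWS (sizes, zone, and names are placeholders):

variable "db_ami_id" {
  type = string
}

# The data lives here and survives instance replacement.
resource "aws_ebs_volume" "db_data" {
  availability_zone = "eu-west-1a"
  size              = 100

  lifecycle {
    prevent_destroy = true
  }
}

# Only this resource is recreated when the Packer image changes.
resource "aws_instance" "db" {
  ami               = var.db_ami_id
  instance_type     = "m5.large"
  availability_zone = "eu-west-1a"
}

resource "aws_volume_attachment" "db_data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.db_data.id
  instance_id = aws_instance.db.id
}

There is still a brief window during replacement while the volume re-attaches, so a short write freeze remains; but no snapshot-and-restore fiddling is needed.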
Q. Best practice for unattended upgrades on immutable servers
I use packer to build immutable Ubuntu 20.04 servers.
How can it work smoothly with unattended upgrades?
Since the image is not rebuilt with the latest packages, updates released after the bake do not apply to new instances. This means that when a server comes up, unattended upgrades will need to run a full upgrade. This is problematic because some upgrades require a reboot, and it prolongs the server's startup time.
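The common mitigation is to fold the full upgrade into the bake itself and rebuild the image on a schedule, so instances boot already patched and unattended-upgrades only handles the delta since the last bake. A sketch of the bake-time step:

provisioner "shell" {
  inline = [
    "sudo apt-get update",
    # Non-interactive full upgrade at bake time, where a reboot is harmless.
    "sudo DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade",
    "sudo apt-get -y autoremove"
  ]
}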
Q. How does using packer and terraform in a CD pipeline compare to docker images built from git?
At work we have a Go CD pipeline for producing docker images, and scheduling docker containers with rancher.
This works quite well. You can commit a change to the Docker image in Git, the pipeline will pick it up, and upgrade the container. (The CD pipeline tags the version of the docker image with the build number of the pipeline, and this provides continuity throughout the upgrade.)
We have a similar arrangement with AWS AMIs using Hashicorp Packer and terraform. You can make a change to a packer json file in git, and the pipeline will rebuild a new AMI. Given the user's approval in Go (we have some stage-gates in Go), this can then, in theory, stop the existing EC2 instance and start a new one based on the AMI that was built. (The ID of the AMI is passed through the pipeline.)
In practice, terraform doesn't work quite as well in a pipeline (compared to docker). Sometimes the sheer complexity of things associated with an EC2 instance and the corresponding VPC/routing arrangements can tangle it. Sometimes you need to custom-write your own tests to see if the EC2 instance was stopped and then started.