Why does a Puppet RPM deployment fail even though the package is available?

Nathan Basanese

We verified that the package is available, and even downloaded it manually and installed it on one of the target servers.

However, when we run Puppet to install our updated REST packages, we get the following error:

err: /Stage[main]/zone_v1::Packages/Package[prod-connect]/ensure: change from 6.27.2-35935 to 6.27.2-36212 failed: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y install prod-connect-6.27.2-36212' returned 1: Error: Nothing to do 

This is not a bug in Fabric, Puppet, or the RPM repository itself. It seems something is misconfigured on the machine from which Fabric kicks off the Puppet run.


1 Answer

Nathan Basanese

//, We looked into the install issue the next morning and were able to continue the Puppet update on the machines in our test zone: the new RPMs installed and the servers started up fine.

We think the issue is that the yum cache on the target servers had not been refreshed, so it did not yet know about the newly published prod-connect-6.27.2-36212 build, and the install therefore failed.
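
One way to confirm this, as a minimal sketch assuming a Fabric 1.x-style deploy script (the host names are hypothetical; only the package name comes from the error above): list what the cached repo metadata on each target actually exposes. If the new build is absent from this output, the cache is stale.

    # Minimal Fabric 1.x sketch; host names are placeholders.
    from fabric.api import env, run, settings, task

    env.hosts = ['app01.example.com', 'app02.example.com']

    @task
    def check_package_visibility(name='prod-connect'):
        # --showduplicates lists every version the cached metadata knows
        # about; a stale cache will not show the just-pushed build.
        with settings(warn_only=True):
            run('yum --showduplicates list available %s' % name)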

Looking at the deploy job log output, it may have looked as though the command had run on only a few machines, when in reality some machines simply could not see the package.

This situation has come up before: a build pushed to our RPM repository would not be visible to a machine when it attempted to ‘yum install’ it.

The solution was to issue a ‘yum clean all’ command so the machine would refresh its local repository metadata, and therefore “see” the newly pushed build.

This would normally not be an issue if there were a longer gap between the engineering team uploading to our RPM repository and our attempting the deployment, because CentOS 6 refreshes its local repository metadata automatically on a regular schedule (governed by yum's metadata_expire setting).

The solution: make sure the deployment Fabric script contains, if it is not already there, a step that runs ‘yum clean all’ on machines in ALL zones, as sketched below.
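
A minimal sketch of that step, again assuming a Fabric 1.x-style script; the role definition and task name are hypothetical, and only ‘yum clean all’ itself comes from the fix above:

    from fabric.api import env, sudo, task

    # In the real script the host list for all zones would come from the
    # deploy inventory; these entries are placeholders.
    env.roledefs = {
        'all_zones': ['app01.example.com', 'app02.example.com',
                      'app03.example.com'],
    }

    @task
    def refresh_yum_metadata():
        # Drop the cached repo metadata so the next 'yum install' sees
        # builds pushed to the repository moments ago. 'yum clean
        # expire-cache' is a lighter alternative that keeps downloaded
        # packages and only forces a metadata re-fetch.
        sudo('yum clean all')

The task would then run before the Puppet update step, e.g. fab -R all_zones refresh_yum_metadata.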

This should hopefully avoid the issue where we make a build available and immediately want to deploy it to a cluster.
