112 Commits

Author SHA1 Message Date
林玮 (Jade Lin)
90d9a60e41 Make the external url of cache server configurable if necessary
2025-08-05 16:54:38 +02:00
badhezi
9924aea786 Evaluate run-name field for workflows (#137)
To support https://github.com/go-gitea/gitea/pull/34301
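As a rough illustration of where `run-name` sits (field names follow the GitHub workflow syntax; the fork's internal model may differ), it is just another expression-bearing workflow field that has to be evaluated before the run title is shown:

```go
// Hypothetical model fragment: `run-name` sits next to `name` at the top
// level of a workflow and may contain expressions, e.g.
//   run-name: Deploy requested by ${{ github.actor }}
// which must be evaluated before the run title is displayed.
type Workflow struct {
	Name    string `yaml:"name"`
	RunName string `yaml:"run-name"`
}
```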

Reviewed-on: https://gitea.com/gitea/act/pulls/137
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: badhezi <zlilaharon@gmail.com>
Co-committed-by: badhezi <zlilaharon@gmail.com>
2025-05-12 17:17:50 +00:00
Jack Jackson
65c232c4a5 Parse permissions (#133)
Resurrecting [this PR](https://gitea.com/gitea/act/pulls/73) as the original author has [lost motivation](https://github.com/go-gitea/gitea/pull/25664#issuecomment-2737099259) (though, to be clear - all credit belongs to them, all mistakes are mine and mine alone!)
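A minimal sketch of what parsing `permissions` involves, assuming gopkg.in/yaml.v3 and illustrative type names (the PR's actual model may differ): the key may be a single scalar such as `read-all`, or a mapping of scope to access level.

```go
package sketch

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Permissions is either a shorthand scalar ("read-all" / "write-all") or a
// map of scope -> access level ("contents: read", "issues: write", ...).
type Permissions map[string]string

func (p *Permissions) UnmarshalYAML(node *yaml.Node) error {
	switch node.Kind {
	case yaml.ScalarNode: // permissions: read-all
		var all string
		if err := node.Decode(&all); err != nil {
			return err
		}
		*p = Permissions{"all": all}
	case yaml.MappingNode: // permissions: {contents: read, issues: write}
		var scopes map[string]string
		if err := node.Decode(&scopes); err != nil {
			return err
		}
		*p = scopes
	default:
		return fmt.Errorf("invalid permissions value: line %d", node.Line)
	}
	return nil
}
```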

Co-authored-by: Søren L. Hansen <sorenisanerd@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/133
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jack Jackson <scubbojj@gmail.com>
Co-committed-by: Jack Jackson <scubbojj@gmail.com>
2025-03-24 18:17:06 +00:00
Guillaume S.
5da4954b65 Fix handling of missing yaml.ScalarNode (#129)
This bug was reported in https://github.com/go-gitea/gitea/issues/33657.
The rewrite of the snippet below was missed in commit 6cdf1e5788:
```go
case string:
    acts[act] = []string{b}
```
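For context, a hedged sketch of the scalar-vs-sequence handling this fix restores, using gopkg.in/yaml.v3; names are illustrative, not the fork's exact code:

```go
package sketch

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// eventNames treats a bare scalar like `push` as a one-element list, which
// is the case the string branch above covers in the interface{} form.
func eventNames(node *yaml.Node) ([]string, error) {
	switch node.Kind {
	case yaml.ScalarNode:
		var s string
		if err := node.Decode(&s); err != nil {
			return nil, err
		}
		return []string{s}, nil
	case yaml.SequenceNode:
		var list []string
		if err := node.Decode(&list); err != nil {
			return nil, err
		}
		return list, nil
	default:
		return nil, fmt.Errorf("unexpected yaml node kind %v", node.Kind)
	}
}
```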

Reviewed-on: https://gitea.com/gitea/act/pulls/129
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Guillaume S. <me@gsvd.dev>
Co-committed-by: Guillaume S. <me@gsvd.dev>
2025-02-26 06:19:02 +00:00
Zettat123
ec091ad269 Support concurrency (#124)
To support `concurrency` syntax for Gitea Actions

Gitea PR: https://github.com/go-gitea/gitea/pull/32751
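For reference, the shape of the `concurrency` block being parsed here, modeled as an illustrative Go struct (field names follow the workflow syntax; the fork's internals may differ):

```go
// Concurrency mirrors the workflow-level (or job-level) `concurrency:` key:
//   concurrency:
//     group: ci-${{ github.ref }}
//     cancel-in-progress: true
// The group value may contain expressions and is evaluated per run.
type Concurrency struct {
	Group            string `yaml:"group"`
	CancelInProgress bool   `yaml:"cancel-in-progress"`
}
```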

Reviewed-on: https://gitea.com/gitea/act/pulls/124
Reviewed-by: Lunny Xiao <lunny@noreply.gitea.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2025-02-11 02:51:48 +00:00
Zettat123
1656206765 Improve the support for reusable workflows (#122)
Fix [#32439](https://github.com/go-gitea/gitea/issues/32439)

- Support reusable workflows with conditional jobs
- Support nesting reusable workflows

Reviewed-on: https://gitea.com/gitea/act/pulls/122
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Jason Song <wolfogre@noreply.gitea.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-11-23 14:14:17 +00:00
Lunny Xiao
6cdf1e5788 Fix ParseRawOn sequence problem (#119)
Fix https://gitea.com/gitea/act/actions/runs/277/jobs/0

Reviewed-on: https://gitea.com/gitea/act/pulls/119
2024-10-05 19:29:55 +00:00
Lunny Xiao
ab381649da Add parsing for workflow dispatch (#118)
Reviewed-on: https://gitea.com/gitea/act/pulls/118
2024-10-03 02:56:58 +00:00
Jason Song
38e7e9e939 Use hashed uses string as cache dir name (#117)
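A rough sketch of the idea in the title: derive the cache directory name by hashing the raw `uses` reference, so arbitrary characters in the reference can't produce unsafe or colliding paths (the hash choice and helper name are assumptions, not the commit's exact code).

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

// cacheDirFor maps a `uses:` reference to a filesystem-safe directory name.
func cacheDirFor(cacheHome, uses string) string {
	sum := sha256.Sum256([]byte(uses))
	return filepath.Join(cacheHome, hex.EncodeToString(sum[:]))
}

func main() {
	fmt.Println(cacheDirFor("/tmp/actcache", "https://gitea.com/actions/checkout@v4"))
}
```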
Reviewed-on: https://gitea.com/gitea/act/pulls/117
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-09-24 06:53:41 +00:00
Zettat123
2ab806053c Check all job results when calling reusable workflows (#116)
Fix [#31900](https://github.com/go-gitea/gitea/issues/31900)

Reviewed-on: https://gitea.com/gitea/act/pulls/116
Reviewed-by: Jason Song <wolfogre@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-09-24 06:52:45 +00:00
Zettat123
6a090f67e5 Support some GITEA_ environment variables (#112)
Fix https://gitea.com/gitea/act_runner/issues/575
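The general shape of such support, as a hedged sketch (the exact set of variables handled is in the PR, not reproduced here): prefer a `GITEA_`-prefixed variable and fall back to its `GITHUB_` counterpart.

```go
package main

import (
	"fmt"
	"os"
)

// envWithGiteaFallback returns GITEA_<name> if set, otherwise GITHUB_<name>.
func envWithGiteaFallback(name string) string {
	if v, ok := os.LookupEnv("GITEA_" + name); ok {
		return v
	}
	return os.Getenv("GITHUB_" + name)
}

func main() {
	fmt.Println(envWithGiteaFallback("SERVER_URL"))
}
```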

Reviewed-on: https://gitea.com/gitea/act/pulls/112
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-07-29 04:17:45 +00:00
Jason Song
517d11c671 Reduce log noise (#108)
Not all noisy logs can be removed at once.

They are commented out rather than deleted, to make merging upstream easier.

The logs removed in this PR are the very long, almost unreadable ones, like:

<img width="839" alt="image" src="/attachments/b59e1dcc-4edd-4f81-b939-83dcc45f2ed2">

Reviewed-on: https://gitea.com/gitea/act/pulls/108
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-04-10 06:55:46 +00:00
Jason Song
e1b1e81124 Revert "Pass 'sleep' as container command rather than entrypoint (#86)" (#107)
This reverts #86.

Some images use a custom entrypoint for a specific purpose, so `[entrypoint] [cmd]` like `helm /bin/sleep 1` will fail.

It caused https://gitea.com/gitea/helm-chart/actions/runs/755, since the image there is `alpine/helm`.

```yaml
  check-and-test:
    runs-on: ubuntu-latest
    container: alpine/helm:3.14.3
```
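In Docker API terms, the distinction looks roughly like this (illustrative values and names, not the fork's exact wiring): after the revert, `sleep` is set as the entrypoint again instead of being appended to the image's own entrypoint as a command.

```go
package sketch

import "github.com/docker/docker/api/types/container"

// keepAliveConfig shows where `sleep` lands after the revert: back on
// Entrypoint (overriding the image's own entrypoint) instead of on Cmd,
// where it would be appended to entrypoints like `helm` and fail.
func keepAliveConfig(image string) *container.Config {
	return &container.Config{
		Image:      image,
		Entrypoint: []string{"/bin/sleep", "3600"}, // behaviour restored by this revert
		// Cmd:     []string{"/bin/sleep", "3600"}, // what #86 did, now reverted
	}
}
```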

Reviewed-on: https://gitea.com/gitea/act/pulls/107
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-04-10 06:53:28 +00:00
Zettat123
64876e3696 Interpolate job name with matrix (#106)
Fix https://github.com/go-gitea/gitea/issues/28207

Reviewed-on: https://gitea.com/gitea/act/pulls/106
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-04-07 03:34:53 +00:00
Jason Song
3fa1dba92b Merge tag 'nektos/v0.2.61' 2024-04-01 14:23:16 +08:00
GitHub Actions
361b7e9f1a chore: bump VERSION to 0.2.61 2024-04-01 02:16:09 +00:00
Zettat123
9725f60394 Support reusing workflows with absolute URLs (#104)
Resolve https://gitea.com/gitea/act_runner/issues/507

Reviewed-on: https://gitea.com/gitea/act/pulls/104
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-03-29 06:15:28 +00:00
ChristopherHX
f825e42ce2 fix: cache adjust restore order of exact key matches (#2267)
* wip: adjust restore order

* fixup

* add tests

* cleanup

* fix typo

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2024-03-29 02:07:20 +00:00
Jason Collins
d9a19c8b02 Trivial: reduce log spam. (#2256)
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2024-03-28 23:28:48 +00:00
James Kang
3949d74af5 chore: remove repetitive words (#2259)
Signed-off-by: majorteach <csgcgl@126.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2024-03-28 23:14:53 +00:00
Thomas E Lackey
a79d81989f Pass 'sleep' as container command rather than entrypoint (#86)
The current code overrides the container's entrypoint with `sleep`.  Unfortunately, that prevents initialization scripts, such as to initialize Docker-in-Docker, from running.

The change simply moves `sleep` from the entrypoint directive to the command directive.

For most containers of this sort, the entrypoint script performs initialization, and then ends with `$@` to execute whatever command is passed.

If the container has no entrypoint, the command is executed directly.  As a result, this should be a transparent change for most use cases, while allowing the container's entrypoint to be used when present.

Reviewed-on: https://gitea.com/gitea/act/pulls/86
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-27 10:17:48 +00:00
Zettat123
655f578563 Remove the network when there is no service (#103)
Reviewed-on: https://gitea.com/gitea/act/pulls/103
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-03-27 10:07:29 +00:00
Zettat123
0054a45d1b Fix bugs related to services (#100)
Related to #99

- use `networkNameForGitea` function instead of `networkName` to get network name
- add the missing `Cmd` and `AutoRemove` when creating service containers

Reviewed-on: https://gitea.com/gitea/act/pulls/100
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-03-26 10:14:06 +00:00
Jason Song
79a7577c15 Merge tag 'nektos/v0.2.60' 2024-03-25 16:58:11 +08:00
Jason Song
a28ebf0a48 Improve workflows (#98)
Starting from setup-go v4, it will cache build dependencies by default, see https://github.com/actions/setup-go#caching-dependency-files-and-build-outputs.

Also bump some versions.

Reviewed-on: https://gitea.com/gitea/act/pulls/98
2024-03-25 15:54:51 +08:00
Jason Song
2b860ce371 Remove emojis in command outputs (#97)
Remove emojis in command outputs; leave others since they don't matter.

Help https://github.com/go-gitea/gitea/pull/29777

Reviewed-on: https://gitea.com/gitea/act/pulls/97
2024-03-25 15:54:39 +08:00
Zettat123
3a9e7d18de Support cloning remote actions from insecure Gitea instances (#92)
Related to https://github.com/go-gitea/gitea/issues/28693

Reviewed-on: https://gitea.com/gitea/act/pulls/92
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-03-25 15:54:09 +08:00
Claudio Nicora
b4edc952d9 Patched options() to let container options propagate to job containers (#80)
This PR let "general" container config to be propagated to each job container.

See:
- https://gitea.com/gitea/act_runner/issues/265#issuecomment-744382
- https://gitea.com/gitea/act_runner/issues/79
- https://gitea.com/gitea/act_runner/issues/378

Reviewed-on: https://gitea.com/gitea/act/pulls/80
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Claudio Nicora <claudio.nicora@gmail.com>
Co-committed-by: Claudio Nicora <claudio.nicora@gmail.com>
2024-03-25 15:43:14 +08:00
sillyguodong
f1213213d8 Make runs-on support variable expression (#91)
Partial implementation of https://gitea.com/gitea/act_runner/issues/445; the Gitea side also needs a PR for the complete functionality.
Gitea side: https://github.com/go-gitea/gitea/pull/29468

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/91
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2024-03-25 15:42:59 +08:00
Jason Song
15045b4fc0 Merge pull request 'Fix panic in extractFromImageEnv' (#81) from wolfogre/act:bugfix/panic_extractFromImageEnv into main
Reviewed-on: https://gitea.com/gitea/act/pulls/81
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-10-31 14:48:40 +00:00
Jason Song
67918333fa fix: panic 2023-10-31 22:32:21 +08:00
Lunny Xiao
c93462e19f Merge pull request 'bump nektos to 0.2.52' (#79) from bump-nektos into main
Reviewed-on: https://gitea.com/gitea/act/pulls/79
Reviewed-by: John Olheiser <john+gitea@jolheiser.com>
2023-10-13 01:20:21 +00:00
techknowlogick
f3264cac20 Merge remote-tracking branch 'upstream/master' into bump-nektos 2023-10-11 15:28:38 -04:00
techknowlogick
4699c3b689 Merge nektos/act/v0.2.51 2023-09-24 15:09:26 -04:00
Jason Song
22d91e3ac3 Merge tag 'nektos/v0.2.49'
Conflicts:
	cmd/input.go
	go.mod
	go.sum
	pkg/exprparser/interpreter.go
	pkg/model/workflow.go
	pkg/runner/expression.go
	pkg/runner/job_executor.go
	pkg/runner/runner.go
2023-08-02 11:52:14 +08:00
sillyguodong
cdc6d4bc6a Support expression in uses (#75)
Actions can specify the download source via a URL prefix. The prefix may contain sensitive information that needs to be stored in the secrets or variables context, so we need to interpolate the expression value of `uses` first.

Reviewed-on: https://gitea.com/gitea/act/pulls/75
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-07-17 03:46:34 +00:00
Zettat123
2069b04779 Fix missed ValidVolumes for docker steps (#74)
Fixes https://gitea.com/gitea/act_runner/issues/277

Thanks @ChristopherHX for finding the cause of the bug.

Reviewed-on: https://gitea.com/gitea/act/pulls/74
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-07-11 02:08:22 +00:00
sati.ac
3813f40cba use remoteAction.URL if not empty (#71)
Fixes https://github.com/go-gitea/gitea/issues/25615

Reviewed-on: https://gitea.com/gitea/act/pulls/71
Co-authored-by: sati.ac <sati.ac@noreply.gitea.com>
Co-committed-by: sati.ac <sati.ac@noreply.gitea.com>
2023-07-03 03:43:44 +00:00
Jason Song
eb19987893 Revert "Support for multiple default URLs for getting actions (#58)" (#70)
Follow https://github.com/go-gitea/gitea/pull/25581 .

Reviewed-on: https://gitea.com/gitea/act/pulls/70
2023-06-30 07:45:13 +00:00
Zettat123
545802b97b Fix the error when removing network in self-hosted mode (#69)
Fixes https://gitea.com/gitea/act_runner/issues/255

Reviewed-on: https://gitea.com/gitea/act/pulls/69
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-06-28 02:27:12 +00:00
Tomasz Duda
515c2c429d fix action cloning, set correct server_url for act_runner exec (#68)
1. The newest act is not able to clone actions based on --default-actions-url.
It might be a side effect of https://gitea.com/gitea/act/pulls/67.
2. Set the correct server_url, api_url and graphql_url for act_runner exec

Reviewed-on: https://gitea.com/gitea/act/pulls/68
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Tomasz Duda <tomaszduda23@gmail.com>
Co-committed-by: Tomasz Duda <tomaszduda23@gmail.com>
2023-06-20 07:36:10 +00:00
Zettat123
a165e17878 Add support for glob syntax when checking volumes (#64)
Follow #60

Reviewed-on: https://gitea.com/gitea/act/pulls/64
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-06-16 05:24:01 +00:00
Zettat123
56e103b4ba Fix the missing URL when using remote reusable workflow (#67)
Reviewed-on: https://gitea.com/gitea/act/pulls/67
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-06-16 05:12:43 +00:00
Marius Zwicker
422cbdf446 Allow to override location of action cache dir (#65)
Adds an explicit config option to specify the directory below which action contents will be cached. If left empty, the previous locations `$XDG_CACHE_HOME/act` or `$HOME/.cache/act` will be used, respectively.

Required to resolve gitea/act_runner#235

Co-authored-by: Marius Zwicker <marius@mlba-team.de>
Reviewed-on: https://gitea.com/gitea/act/pulls/65
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Marius Zwicker <emzeat@noreply.gitea.com>
Co-committed-by: Marius Zwicker <emzeat@noreply.gitea.com>
2023-06-16 03:41:39 +00:00
Jason Song
8c56bd3aa5 Merge tag 'nektos/v0.2.46' 2023-06-16 11:08:39 +08:00
a1012112796
a94498b482 fix local workflow for act_runner exec (#63)
By the way, this exports `ACT_SKIP_CHECKOUT` as an environment variable so users can do some special configuration for local tests.

example usage:

7a3ab0fdbc

Reviewed-on: https://gitea.com/gitea/act/pulls/63
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-committed-by: a1012112796 <1012112796@qq.com>
2023-06-13 03:46:26 +00:00
sillyguodong
fe76a035ad Follow upstream support for variables (#66)
The upstream [PR](https://github.com/nektos/act/pull/1833) already supports variables, so this PR reverts #43 (commit de529139af) and cherry-picks commit [6ce45e3](6ce45e3f24).

Co-authored-by: Kuan Yong <wong0514@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/66
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-06-12 06:54:17 +00:00
sillyguodong
6ce5c93cc8 Put the job container name into the env context (#62)
Related: https://gitea.com/gitea/act_runner/issues/189#issuecomment-740636
According to the [Docker docs](https://docs.docker.com/engine/reference/commandline/run/#volumes-from), the `--volumes-from` flag is used when running or creating a new container and takes the name or ID of the container whose volumes you want to share. Here's the syntax:
```
docker run --volumes-from <container_name_or_id> <image>
```
So this PR puts the job container name into the `env` context.
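Making the name available matters because the Docker API counterpart of the flag above is `HostConfig.VolumesFrom`, which takes exactly that name; a minimal, illustrative sketch (not the fork's actual wiring):

```go
package sketch

import "github.com/docker/docker/api/types/container"

// shareJobVolumes shows where the job container name ends up: the API-side
// equivalent of `docker run --volumes-from <job-container>`.
func shareJobVolumes(jobContainerName string) *container.HostConfig {
	return &container.HostConfig{
		VolumesFrom: []string{jobContainerName},
	}
}
```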

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/62
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-06-06 00:21:31 +00:00
Zettat123
92b4d73376 Check volumes (#60)
This PR adds a `ValidVolumes` config. Users can specify the volumes (including bind mounts) that can be mounted to containers by this config.

Options related to volumes:
- [jobs.<job_id>.container.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainervolumes)
- [jobs.<job_id>.services.<service_id>.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idvolumes)

In addition, volumes specified by `options` will also be checked.

Currently, the following default volumes (see a72822b3f8/pkg/runner/run_context.go (L116-L166)) will be added to `ValidVolumes`:
- `act-toolcache`
- `<container-name>` and `<container-name>-env`
- `/var/run/docker.sock` (We need to add a new configuration to control whether the docker daemon can be mounted)
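A minimal sketch of how such a check can work, using the `github.com/gobwas/glob` dependency that appears in this fork's `go.mod` (the follow-up #64, above, adds the glob syntax); names are illustrative rather than the PR's exact code:

```go
package sketch

import "github.com/gobwas/glob"

// volumeAllowed reports whether a requested volume or bind mount matches any
// configured ValidVolumes pattern, e.g. "act-toolcache" or "data-*".
func volumeAllowed(validVolumes []string, volume string) bool {
	for _, pattern := range validVolumes {
		g, err := glob.Compile(pattern)
		if err != nil {
			continue // skip invalid patterns
		}
		if g.Match(volume) {
			return true
		}
	}
	return false
}
```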

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/60
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-06-05 09:21:59 +00:00
Zettat123
183bb7af1b Support for multiple default URLs for getting actions (#58)
Partially resolves https://github.com/go-gitea/gitea/issues/24789.

`act_runner` needs to be improved to parse `gitea_default_actions_url` after this PR is merged (https://gitea.com/gitea/act_runner/pulls/200).

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/58
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-06-05 09:07:17 +00:00
sillyguodong
a72822b3f8 Fix some options issue. (#59)
- Fix https://gitea.com/gitea/act_runner/issues/220: ignore `--network` and `--net` in `options`.
- Fix https://gitea.com/gitea/act_runner/issues/222: add the `mergo.WithAppendSlice` option when calling `mergo.Merge()`.

Reviewed-on: https://gitea.com/gitea/act/pulls/59
Reviewed-by: Zettat123 <zettat123@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-05-31 10:33:39 +00:00
sillyguodong
9283cfc9b1 Fix container network issue (#56)
Follow: https://gitea.com/gitea/act_runner/pulls/184
Close https://gitea.com/gitea/act_runner/issues/177

#### Changes:
- `act` creates new networks only if `NeedCreateNetwork` is true, and removes them at the end. `NeedCreateNetwork` is passed by `act_runner` and is true only if `container.network` in the `act_runner` configuration file is empty.
- In the `docker create` phase, specify the network to which containers will connect. If none is specified, containers connect to the `bridge` network that Docker creates automatically.
  - If the network is a user-defined network (i.e. the value of `container.network` is empty or `<custom-network>`; the network created by `act` is also a user-defined network), also specify an alias via `--network-alias`. The alias of a service is `<service-id>`, so service containers can be reached as `<service-id>:<port>` in the job's steps.
- No longer run `docker network connect` after `docker start`.
  - On the one hand, `docker network connect` applies only to user-defined networks, so `docker network connect host <container-name>` returns an error.
  - On the other hand, specifying the network at `docker create` time achieves the same effect.
- No longer try to remove containers and networks before the `docker start` stage, because their names won't repeat.
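A minimal, illustrative sketch of the "specify the network and alias at `docker create` time" point from the change list above, assuming a recent Docker Go SDK; the function and variable names are not the fork's actual code:

```go
package sketch

import (
	"context"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

// createServiceContainer joins the service container to the job network at
// create time and registers the <service-id> alias, so job steps can reach
// it as <service-id>:<port>.
func createServiceContainer(ctx context.Context, cli *client.Client, image, name, networkName, serviceID string) (string, error) {
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{Image: image},
		&container.HostConfig{},
		&network.NetworkingConfig{
			EndpointsConfig: map[string]*network.EndpointSettings{
				networkName: {Aliases: []string{serviceID}},
			},
		},
		nil, // platform
		name,
	)
	if err != nil {
		return "", err
	}
	return resp.ID, nil
}
```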

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/56
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-05-16 14:03:55 +08:00
Zettat123
27846050ae Force privileged to false when runner's config is false (#57)
The runner's `privileged` config can be bypassed. Currently, even if the runner's `privileged` config is false, users can still enable privileged mode by using `--privileged` in the container's options string. Therefore, if the runner's config is false, `--privileged` in the options string should be ignored.

Reviewed-on: https://gitea.com/gitea/act/pulls/57
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-05-16 11:21:18 +08:00
Zettat123
ed9b6643ca Do not set the default network to host (#55)
In [nektos/act/pull/1739](https://github.com/nektos/act/pull/1739), the container network mode defaults to `host` if the network option isn't specified in `options`.  When calling `ConnectToNetwork`, the `host` network mode may cause the error:
`Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network`
see the code: a94a01bff2/pkg/container/docker_run.go (L51-L68)

To avoid the error, this logic needs to be removed to keep the default network mode as `bridge`.

Reviewed-on: https://gitea.com/gitea/act/pulls/55
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-05-09 16:41:31 +08:00
Jason Song
a94a01bff2 Fix regression after merging upstream (#54)
Related to 229dbaf153

Reviewed-on: https://gitea.com/gitea/act/pulls/54
2023-05-04 17:54:09 +08:00
Jason Song
229dbaf153 Merge tag 'nektos/v0.2.45' 2023-05-04 17:45:53 +08:00
Zettat123
a18648ee73 Support services credentials (#51)
If a service's image is from a container registry that requires authentication, `act_runner` will need `credentials` to pull the image, see the [documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idcredentials).
Currently, `act_runner` incorrectly uses the `credentials` of `containers` to pull services' images, and the `credentials` of services won't be used; see the related code: 0c1f2edb99/pkg/runner/run_context.go (L228-L269)

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/51
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-25 14:45:39 +08:00
sillyguodong
518d8c96f3 Keep the order of on when parsing workflow (#46)
Keep the order of `on` when parsing workflow, and fix the occasional unit test failure of `actions` like https://gitea.com/gitea/act/actions/runs/68

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/46
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-04-24 23:16:41 +08:00
Zettat123
0c1f2edb99 Support specifying command for services (#50)
This PR is to support overwriting the default `CMD` command of `services` containers.

This is a Gitea specific feature and GitHub Actions doesn't support this syntax.

Reviewed-on: https://gitea.com/gitea/act/pulls/50
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-23 14:55:17 +08:00
Zettat123
721857e4a0 Remove empty steps when decoding Job (#49)
Follow #48
Empty steps are invalid, so remove them when decoding `Job` from YAML.
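An illustrative version of the filtering (the `Step` type here is a stand-in for the fork's model type, not its actual definition):

```go
package sketch

// Step is a stand-in for the decoded job step model.
type Step struct {
	Name string
	Run  string
}

// dropEmptySteps removes nil entries, which come from empty YAML list items,
// so later code never calls methods on a nil *Step.
func dropEmptySteps(steps []*Step) []*Step {
	kept := steps[:0]
	for _, s := range steps {
		if s != nil {
			kept = append(kept, s)
		}
	}
	return kept
}
```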

Reviewed-on: https://gitea.com/gitea/act/pulls/49
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-21 20:21:15 +08:00
Zettat123
6b1010ad07 Fix potential panic caused by nil Step (#48)
```yml
jobs:
  job1:
    steps:
      - run: echo HelloWorld
      - # empty step
```

If a job contains an empty step, `Job.Steps` will have a nil element, and calling `Step.String()` on it will cause a panic.

See [the code of gitea](948a9ee5e8/models/actions/task.go (L300-L301))

Reviewed-on: https://gitea.com/gitea/act/pulls/48
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-21 14:45:38 +08:00
Zettat123
e12252a43a Support interpolation for env of services (#47)
Reviewed-on: https://gitea.com/gitea/act/pulls/47
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-20 16:24:31 +08:00
Zettat123
8609522aa4 Support services options (#45)
Reviewed-on: https://gitea.com/gitea/act/pulls/45
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-19 21:53:57 +08:00
Zettat123
6a876c4f99 Add go build tag to docker_network.go (#44)
Fix the build failure in https://gitea.com/gitea/act_runner/actions/runs/278/jobs/0

Reviewed-on: https://gitea.com/gitea/act/pulls/44
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-19 16:19:38 +08:00
sillyguodong
de529139af Support configuration variables (#43)
Related to: https://gitea.com/gitea/act_runner/issues/127

This PR makes `act` support expressions like `${{ vars.YOUR_CUSTOM_VARIABLES }}`.

Reviewed-on: https://gitea.com/gitea/act/pulls/43
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>
2023-04-19 15:22:56 +08:00
Zettat123
d3a56cdb69 Support services (#42)
Replace #5

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/42
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-19 11:23:28 +08:00
Galen Abell
9bdddf18e0 Parse secret inputs in reusable workflows (#41)
Secrets can be passed to reusable workflows, either explicitly by key or
implicitly by `inherit`:

https://docs.github.com/en/actions/using-workflows/reusing-workflows#using-inputs-and-secrets-in-a-reusable-workflow

Reviewed-on: https://gitea.com/gitea/act/pulls/41
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Galen Abell <galen@galenabell.com>
Co-committed-by: Galen Abell <galen@galenabell.com>
2023-04-17 13:41:02 +08:00
Zettat123
ac1ba34518 Fix incorrect job result status (#40)
Fix [#24039(GitHub)](https://github.com/go-gitea/gitea/issues/24039)

At present, if a job fails in `Set up job`, the result status of the job will still be `success`. The reason is that the `pre` steps don't call `SetJobError`, so `jobError` will be nil when the `post` steps set the job result. See 5c4a96bcb7/pkg/runner/job_executor.go (L99)

Reviewed-on: https://gitea.com/gitea/act/pulls/40
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-14 15:42:03 +08:00
Jason Song
5c4a96bcb7 Avoid using log.Fatal in pkg/* (#39)
Follow https://github.com/nektos/act/pull/1705

Reviewed-on: https://gitea.com/gitea/act/pulls/39
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-04-07 16:31:03 +08:00
Zettat123
62abf4fe11 Add token for getting reusable workflows from local private repos (#38)
Partially fixes https://gitea.com/gitea/act_runner/issues/91

If the repository is private, we need to provide the token to the caller workflows to access the called reusable workflows from the same repository.

Reviewed-on: https://gitea.com/gitea/act/pulls/38
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-06 14:16:20 +08:00
Zettat123
cfedc518ca Add With field to jobparser.Job (#37)
Partially fixes [gitea/act_runner#91 comment](https://gitea.com/gitea/act_runner/issues/91#issuecomment-734544)

nektos/act has added `With` to support reusable workflows (see [code](68c72b9a51/pkg/model/workflow.go (L160)))

GitHub Actions also supports [`jobs.<job_id>.with`](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idwith)

Reviewed-on: https://gitea.com/gitea/act/pulls/37
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-04-04 10:59:53 +08:00
Zettat123
5e76853b55 Support reusable workflow (#34)
Fix https://gitea.com/gitea/act_runner/issues/80
Fix https://gitea.com/gitea/act_runner/issues/85

To support reusable workflows, I made some improvements:
- read `yml` files from both `.gitea/workflows` and `.github/workflows`
- clone the repository for local reusable workflows, because the runner doesn't have the code in its local directory
- fix incorrect clone URLs like `https://https://gitea.com`

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/34
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-03-29 13:59:22 +08:00
Jason Song
2eb4de02ee Expose SetJob to make EraseNeeds work (#35)
Related to #33

Reviewed-on: https://gitea.com/gitea/act/pulls/35
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-29 13:57:29 +08:00
Jason Song
342ad6a51a Keep the order of jobs in the workflow file when parsing (#33)
Keep the order of jobs in the workflow file when parsing, which makes it possible for Gitea to show jobs in their original order in the UI.
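A hedged sketch of why the order can be preserved: decoding `jobs` into a Go map loses ordering, but walking the `yaml.Node` mapping content directly keeps the file order (gopkg.in/yaml.v3; the function name is illustrative):

```go
package sketch

import "gopkg.in/yaml.v3"

// jobIDsInOrder returns the job IDs in the order they appear in the file.
// In a yaml.MappingNode, Content alternates key and value nodes, so taking
// every second entry yields the keys in document order.
func jobIDsInOrder(jobs *yaml.Node) []string {
	ids := make([]string, 0, len(jobs.Content)/2)
	for i := 0; i+1 < len(jobs.Content); i += 2 {
		ids = append(ids, jobs.Content[i].Value)
	}
	return ids
}
```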

Reviewed-on: https://gitea.com/gitea/act/pulls/33
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-28 11:38:40 +08:00
Jason Song
568f053723 Revert "Erase needs of job in SingleWorkflow (#9)" (#32)
This reverts commit 1ba076d321.

`EraseNeeds` shouldn't be used in `jobparser.Parse`; it's for 023e61e678/models/actions/run.go (L200).

Otherwise, Gitea won't be able to get the `Needs` of jobs.

Reviewed-on: https://gitea.com/gitea/act/pulls/32
Reviewed-by: Zettat123 <zettat123@noreply.gitea.io>
2023-03-27 17:46:50 +08:00
Bo-Yi.Wu
8f12a6c947 chore: update go-git dependency in go.mod (#30)
- Replace `go-git` with a forked version in `go.mod`

Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/30
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
2023-03-26 21:27:19 +08:00
Lunny Xiao
83fb85f702 Fix bug (#31)
Reviewed-on: https://gitea.com/gitea/act/pulls/31
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-committed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-26 21:01:46 +08:00
Zettat123
3daf313205 chore(yaml): Improve ParseRawOn (#28)
See [act_runner #71 comment](https://gitea.com/gitea/act_runner/issues/71#issuecomment-733806); we need to handle a `nil interface{}` in the `ParseRawOn` function.

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/28
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: appleboy <appleboy.tw@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-03-25 12:13:50 +08:00
Lunny Xiao
7c5400d75b ParseRawOn support schedules (#29)
Fix gitea/act_runner#71

Reviewed-on: https://gitea.com/gitea/act/pulls/29
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Zettat123 <zettat123@noreply.gitea.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-committed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-24 20:15:46 +08:00
Jason Song
929ea6df75 Support gitea context (#27)
This makes it possible to use contexts like `${{ gitea.repository }}` in workflow YAML files; it is the same as `${{ github.repository }}`.

Reviewed-on: https://gitea.com/gitea/act/pulls/27
Reviewed-by: Zettat123 <zettat123@noreply.gitea.io>
2023-03-23 12:14:28 +08:00
Zettat123
f6a8a0e643 Add extra path env for running go actions (#26)
At present, the runner can't run Go actions even if the Go environment has been set up by the `setup-go` action. The reason is that `setup-go` adds the Go-related paths to [`GITHUB_PATH`](https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path), but in #22 I forgot to apply them before running Go actions. After adding the `ApplyExtraPath` function, the `setup-go` action runs properly.

Reviewed-on: https://gitea.com/gitea/act/pulls/26
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-03-21 15:31:30 +08:00
a1012112796
556fd20aed make sure special logs be sent to gitea's server (#25)
example:
https://gitea.com/a1012112796/test_action/actions/runs/7

![image](/attachments/a8931f2f-096f-41fd-8f9f-0c8322ee985a)

TODO: handle them specially in the UI

Signed-off-by: a1012112796 <1012112796@qq.com>

Reviewed-on: https://gitea.com/gitea/act/pulls/25
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-committed-by: a1012112796 <1012112796@qq.com>
2023-03-17 23:01:31 +08:00
Jason Song
a8298365fe Fix incompatibility caused by tracking upstream and add actions to test it (#24)
Reviewed-on: https://gitea.com/gitea/act/pulls/24
2023-03-16 15:00:11 +08:00
Jason Song
1dda0aec69 Merge tag 'nektos/v0.2.43'
Conflicts:
	pkg/container/docker_run.go
	pkg/runner/action.go
	pkg/runner/logger.go
	pkg/runner/run_context.go
	pkg/runner/runner.go
	pkg/runner/step_action_remote_test.go
2023-03-16 11:45:29 +08:00
Jason Song
49e204166d Update forking rules (#23)
As the title.

Reviewed-on: https://gitea.com/gitea/act/pulls/23
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-16 10:46:41 +08:00
Zettat123
a36b003f7a Improve running with go (#22)
Close #21

I have tested this PR and run Go actions successfully on:
- Windows host
- Docker on Windows
- Linux host
- Docker on Linux

Before running Go actions, we need to make sure that Go has been installed on the host or the Docker image.

Reviewed-on: https://gitea.com/gitea/act/pulls/22
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2023-03-14 16:55:36 +08:00
Zettat123
0671d16694 Fix missing ActionRunsUsingGo (#20)
- Allow `using: "go"` when unmarshalling YAML.
- Add `ActionRunsUsingGo` to returned errors.

Co-authored-by: Zettat123 <zettat123@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/20
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@noreply.gitea.io>
Co-committed-by: Zettat123 <zettat123@noreply.gitea.io>
2023-03-09 22:51:58 +08:00
a1012112796
881dbdb81b Make log level configurable (#19)
Related: https://gitea.com/gitea/act_runner/pulls/39
Reviewed-on: https://gitea.com/gitea/act/pulls/19
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-committed-by: a1012112796 <1012112796@qq.com>
2023-03-08 14:46:39 +08:00
Jason Song
1252e551b8 Replace more strings.ReplaceAll with safeFilename (#18)
Follow #16 #17

Reviewed-on: https://gitea.com/gitea/act/pulls/18
2023-02-24 14:20:34 +08:00
Jason Song
c614d8b96c Replace more strings.ReplaceAll with safeFilename (#17)
Follow #16.

Reviewed-on: https://gitea.com/gitea/act/pulls/17
2023-02-24 12:11:30 +08:00
Jason Song
84b6649b8b Safe filename (#16)
Fix #15.
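A hypothetical sketch of what a `safeFilename`-style helper does (the fork's actual character set may differ): map characters that are problematic in file names to underscores, instead of scattering `strings.ReplaceAll` calls.

```go
package sketch

import "strings"

// safeFilename replaces characters that are invalid or awkward in file names
// on common filesystems with underscores.
func safeFilename(name string) string {
	return strings.Map(func(r rune) rune {
		switch r {
		case '<', '>', ':', '"', '/', '\\', '|', '?', '*':
			return '_'
		default:
			return r
		}
	}, name)
}
```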

Reviewed-on: https://gitea.com/gitea/act/pulls/16
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-02-24 10:17:36 +08:00
Jason Song
dca7801682 Support uses http(s)://host/owner/repo as actions (#14)
Examples:

```yaml
jobs:
  my_first_job:
    steps:
      - name: My first step
        uses: https://gitea.com/actions/heroku@main
      - name: My second step
        uses: http://example.com/actions/heroku@v2.0.1
```
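A rough sketch of splitting such a reference (the helper name and return shape are assumptions, not the commit's code): strip the `@ref` suffix, then parse the remainder as a URL to get the host and the `owner/repo` path.

```go
package sketch

import (
	"net/url"
	"strings"
)

// parseRemoteUses splits "https://gitea.com/actions/heroku@main" into
// host "gitea.com", path "actions/heroku" and ref "main".
func parseRemoteUses(uses string) (host, path, ref string, err error) {
	spec := uses
	if i := strings.LastIndex(spec, "@"); i >= 0 {
		spec, ref = spec[:i], spec[i+1:]
	}
	u, err := url.Parse(spec)
	if err != nil {
		return "", "", "", err
	}
	return u.Host, strings.TrimPrefix(u.Path, "/"), ref, nil
}
```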

Reviewed-on: https://gitea.com/gitea/act/pulls/14
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-02-15 16:28:33 +08:00
Lunny Xiao
4b99ed8916 Support go run on action (#12)
Reviewed-on: https://gitea.com/gitea/act/pulls/12
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-committed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-15 16:10:15 +08:00
Lunny Xiao
e46ede1b17 parse raw on (#11)
Reviewed-on: https://gitea.com/gitea/act/pulls/11
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-committed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-31 15:49:55 +08:00
Jason Song
1ba076d321 Erase needs of job in SingleWorkflow (#9)
Reviewed-on: https://gitea.com/gitea/act/pulls/9
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-01-30 11:42:19 +08:00
appleboy
0efa2d5e63 fix(test): needs condition. (#8)
as title.

Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>

Co-authored-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/8
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-21 17:09:51 +08:00
Jason Song
0a37a03f2e Clone actions without token (#6)
We shouldn't provide a token when cloning actions; the token comes from the instance that triggered the task, which might not be the instance that provides the actions.

For GitHub, they are the same (always github.com). But for Gitea, tasks triggered by a.com can clone actions from b.com.

Reviewed-on: https://gitea.com/gitea/act/pulls/6
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-01-06 13:34:38 +08:00
appleboy
88cce47022 feat(workflow): support schedule event (#4)
fix https://gitea.com/gitea/act/issues/3

Signed-off-by: Bo-Yi.Wu <appleboy.tw@gmail.com>

Co-authored-by: Bo-Yi.Wu <appleboy.tw@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/4
2022-12-10 09:14:14 +08:00
Jason Song
7920109e89 Merge tag 'nektos/v0.2.34' 2022-12-05 17:08:17 +08:00
Jason Song
4cacc14d22 feat: adjust container name format (#1)
Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/1
2022-11-24 14:45:32 +08:00
Jason Song
c6b8548d35 feat: support PlatformPicker 2022-11-22 16:39:19 +08:00
Jason Song
64cae197a4 Support step number 2022-11-22 16:11:35 +08:00
Jason Song
7fb84a54a8 chore: update LICENSE 2022-11-22 15:26:02 +08:00
Jason Song
70cc6c017b docs: add naming rule for git ref 2022-11-22 15:05:12 +08:00
Lunny Xiao
d7e9ea75fc disable graphql url because gitea doesn't support that 2022-11-22 14:42:48 +08:00
Jason Song
b9c20dcaa4 feat: support more options of containers 2022-11-22 14:42:12 +08:00
Jason Song
97629ae8af fix: set logger with trace level 2022-11-22 14:41:57 +08:00
Lunny Xiao
b9a9812ad9 Fix API 2022-11-22 14:22:03 +08:00
Lunny Xiao
113c3e98fb support bot site 2022-11-22 14:17:06 +08:00
Jason Song
7815eec33b Add custom enhancements 2022-11-22 14:16:35 +08:00
Jason Song
c051090583 Add description of branches 2022-11-22 14:02:01 +08:00
fuxiaohei
0fa1fe0310 feat: add logger hook for standalone job logger 2022-11-22 14:00:13 +08:00
62 changed files with 3244 additions and 131 deletions

.gitea/workflows/test.yml Normal file

@@ -0,0 +1,21 @@
name: checks
on:
  - push
  - pull_request

jobs:
  lint:
    name: check and test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: 'go.mod'
      - name: vet checks
        run: go vet -v ./...
      - name: build
        run: go build -v ./...
      - name: test
        run: | # Test only the new packages in this fork. Add more packages as needed.
          go test -v ./pkg/jobparser

.gitignore vendored

@@ -31,3 +31,4 @@ coverage.txt
# megalinter
report/
act


@@ -26,7 +26,7 @@
## Images based on [`actions/virtual-environments`][gh/actions/virtual-environments]
**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to to use it.**
**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to use it.**
| Image | Size | GitHub Repository |
| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | ------------------------------------------------------- |


@@ -1,5 +1,6 @@
MIT License
Copyright (c) 2022 The Gitea Authors
Copyright (c) 2019
Permission is hereby granted, free of charge, to any person obtaining a copy


@@ -1,3 +1,28 @@
## Forking rules
This is a custom fork of [nektos/act](https://github.com/nektos/act/), for the purpose of serving [act_runner](https://gitea.com/gitea/act_runner).
It cannot be used as command line tool anymore, but only as a library.
It's a soft fork, which means that it will track the latest release of nektos/act.
Branches:
- `main`: default branch, contains custom changes, based on the latest release(not the latest of the master branch) of nektos/act.
- `nektos/master`: mirror for the master branch of nektos/act.
Tags:
- `nektos/vX.Y.Z`: mirror for `vX.Y.Z` of [nektos/act](https://github.com/nektos/act/).
- `vX.YZ.*`: based on `nektos/vX.Y.Z`, contains custom changes.
- Examples:
- `nektos/v0.2.23` -> `v0.223.*`
- `nektos/v0.3.1` -> `v0.301.*`, not ~~`v0.31.*`~~
- `nektos/v0.10.1` -> `v0.1001.*`, not ~~`v0.101.*`~~
- `nektos/v0.3.100` -> not ~~`v0.3100.*`~~, I don't think it's really going to happen, if it does, we can find a way to handle it.
---
![act-logo](https://raw.githubusercontent.com/wiki/nektos/act/img/logo-150.png)
# Overview [![push](https://github.com/nektos/act/workflows/push/badge.svg?branch=master&event=push)](https://github.com/nektos/act/actions) [![Join the chat at https://gitter.im/nektos/act](https://badges.gitter.im/nektos/act.svg)](https://gitter.im/nektos/act?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/github.com/nektos/act)](https://goreportcard.com/report/github.com/nektos/act) [![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)


@@ -1 +1 @@
0.2.60
0.2.61


@@ -46,6 +46,7 @@ type Input struct {
artifactServerPort string
noCacheServer bool
cacheServerPath string
cacheServerAdvertiseURL string
cacheServerAddr string
cacheServerPort uint16
jsonLogger bool


@@ -94,6 +94,7 @@ func Execute(ctx context.Context, version string) {
rootCmd.PersistentFlags().BoolVarP(&input.noSkipCheckout, "no-skip-checkout", "", false, "Do not skip actions/checkout")
rootCmd.PersistentFlags().BoolVarP(&input.noCacheServer, "no-cache-server", "", false, "Disable cache server")
rootCmd.PersistentFlags().StringVarP(&input.cacheServerPath, "cache-server-path", "", filepath.Join(CacheHomeDir, "actcache"), "Defines the path where the cache server stores caches.")
rootCmd.PersistentFlags().StringVarP(&input.cacheServerAdvertiseURL, "cache-server-advertise-url", "", "", "Defines the URL for advertising the cache server behind a proxy. e.g.: https://act-cache-server.example.com")
rootCmd.PersistentFlags().StringVarP(&input.cacheServerAddr, "cache-server-addr", "", common.GetOutboundIP().String(), "Defines the address to which the cache server binds.")
rootCmd.PersistentFlags().Uint16VarP(&input.cacheServerPort, "cache-server-port", "", 0, "Defines the port where the artifact server listens. 0 means a randomly available port.")
rootCmd.PersistentFlags().StringVarP(&input.actionCachePath, "action-cache-path", "", filepath.Join(CacheHomeDir, "act"), "Defines the path where the actions get cached and host workspaces created.")
@@ -598,7 +599,7 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
var cacheHandler *artifactcache.Handler
if !input.noCacheServer && envs[cacheURLKey] == "" {
var err error
cacheHandler, err = artifactcache.StartHandler(input.cacheServerPath, input.cacheServerAddr, input.cacheServerPort, common.Logger(ctx))
cacheHandler, err = artifactcache.StartHandler(input.cacheServerPath, input.cacheServerAdvertiseURL, input.cacheServerAddr, input.cacheServerPort, common.Logger(ctx))
if err != nil {
return err
}

go.mod

@@ -14,6 +14,7 @@ require (
github.com/docker/go-connections v0.4.0
github.com/go-git/go-billy/v5 v5.5.0
github.com/go-git/go-git/v5 v5.11.0
github.com/gobwas/glob v0.2.3
github.com/imdario/mergo v0.3.16
github.com/joho/godotenv v1.5.1
github.com/julienschmidt/httprouter v1.3.0

go.sum

@@ -63,6 +63,8 @@ github.com/go-git/go-billy/v5 v5.5.0/go.mod h1:hmexnoNsr2SJU1Ju67OaNz5ASJY3+sHgF
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
github.com/go-git/go-git/v5 v5.11.0 h1:XIZc1p+8YzypNr34itUfSvYJcv+eYdTnTvOZ2vD3cA4=
github.com/go-git/go-git/v5 v5.11.0/go.mod h1:6GFcX2P3NM7FPBfpePbpLd21XxsgdAt+lKqXmCUiUCY=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=


@@ -38,10 +38,11 @@ type Handler struct {
gcing atomic.Bool
gcAt time.Time
outboundIP string
outboundIP string
advertiseURL string
}
func StartHandler(dir, outboundIP string, port uint16, logger logrus.FieldLogger) (*Handler, error) {
func StartHandler(dir, advertiseURL, outboundIP string, port uint16, logger logrus.FieldLogger) (*Handler, error) {
h := &Handler{}
if logger == nil {
@@ -71,6 +72,8 @@ func StartHandler(dir, outboundIP string, port uint16, logger logrus.FieldLogger
}
h.storage = storage
h.advertiseURL = advertiseURL
if outboundIP != "" {
h.outboundIP = outboundIP
} else if ip := common.GetOutboundIP(); ip == nil {
@@ -111,10 +114,13 @@ func StartHandler(dir, outboundIP string, port uint16, logger logrus.FieldLogger
}
func (h *Handler) ExternalURL() string {
// TODO: make the external url configurable if necessary
return fmt.Sprintf("http://%s:%d",
h.outboundIP,
h.listener.Addr().(*net.TCPAddr).Port)
if h.advertiseURL != "" {
return h.advertiseURL
} else {
return fmt.Sprintf("http://%s:%d",
h.outboundIP,
h.listener.Addr().(*net.TCPAddr).Port)
}
}
func (h *Handler) Close() error {
@@ -352,6 +358,17 @@ func (h *Handler) middleware(handler httprouter.Handle) httprouter.Handle {
func findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
cache := &Cache{}
for _, prefix := range keys {
// if a key in the list matches exactly, don't return partial matches
if err := db.FindOne(cache,
bolthold.Where("Key").Eq(prefix).
And("Version").Eq(version).
And("Complete").Eq(true).
SortBy("CreatedAt").Reverse()); err == nil || !errors.Is(err, bolthold.ErrNotFound) {
if err != nil {
return nil, fmt.Errorf("find cache: %w", err)
}
return cache, nil
}
prefixPattern := fmt.Sprintf("^%s", regexp.QuoteMeta(prefix))
re, err := regexp.Compile(prefixPattern)
if err != nil {


@@ -20,7 +20,7 @@ import (
func TestHandler(t *testing.T) {
dir := filepath.Join(t.TempDir(), "artifactcache")
handler, err := StartHandler(dir, "", 0, nil)
handler, err := StartHandler(dir, "", "", 0, nil)
require.NoError(t, err)
base := fmt.Sprintf("%s%s", handler.ExternalURL(), urlBase)
@@ -422,6 +422,110 @@ func TestHandler(t *testing.T) {
assert.Equal(t, key+"_abc", got.CacheKey)
}
})
t.Run("exact keys are preferred (key 0)", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b_c",
key + "_a_b",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
key + "_a",
key + "_a_b",
}, ",")
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a` because:
- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
*/
expect := 0
got := struct {
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, keys[expect], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[expect], content)
})
t.Run("exact keys are preferred (key 1)", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b_c",
key + "_a_b",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
"------------------------------------------------------",
key + "_a",
key + "_a_b",
}, ",")
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a` because:
- `------------------------------------------------------` doesn't match any caches.
- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
*/
expect := 0
got := struct {
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, keys[expect], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[expect], content)
})
}
func uploadCacheNormally(t *testing.T, base, key, version string, content []byte) {
@@ -485,7 +589,7 @@ func uploadCacheNormally(t *testing.T, base, key, version string, content []byte
func TestHandler_gcCache(t *testing.T) {
dir := filepath.Join(t.TempDir(), "artifactcache")
handler, err := StartHandler(dir, "", 0, nil)
handler, err := StartHandler(dir, "", "", 0, nil)
require.NoError(t, err)
defer func() {


@@ -97,7 +97,7 @@ func NewParallelExecutor(parallel int, executors ...Executor) Executor {
errs := make(chan error, len(executors))
if 1 > parallel {
log.Infof("Parallel tasks (%d) below minimum, setting to 1", parallel)
log.Debugf("Parallel tasks (%d) below minimum, setting to 1", parallel)
parallel = 1
}


@@ -226,6 +226,9 @@ type NewGitCloneExecutorInput struct {
Dir string
Token string
OfflineMode bool
// For Gitea
InsecureSkipTLS bool
}
// CloneIfRequired ...
@@ -247,6 +250,8 @@ func CloneIfRequired(ctx context.Context, refName plumbing.ReferenceName, input
cloneOptions := git.CloneOptions{
URL: input.URL,
Progress: progressWriter,
InsecureSkipTLS: input.InsecureSkipTLS, // For Gitea
}
if input.Token != "" {
cloneOptions.Auth = &http.BasicAuth{
@@ -308,6 +313,11 @@ func NewGitCloneExecutor(input NewGitCloneExecutorInput) common.Executor {
// fetch latest changes
fetchOptions, pullOptions := gitOptions(input.Token)
if input.InsecureSkipTLS { // For Gitea
fetchOptions.InsecureSkipTLS = true
pullOptions.InsecureSkipTLS = true
}
if !isOfflineMode {
err = r.Fetch(&fetchOptions)
if err != nil && !errors.Is(err, git.NoErrAlreadyUpToDate) {


@@ -25,3 +25,24 @@ func Logger(ctx context.Context) logrus.FieldLogger {
func WithLogger(ctx context.Context, logger logrus.FieldLogger) context.Context {
return context.WithValue(ctx, loggerContextKeyVal, logger)
}
type loggerHookKey string
const loggerHookKeyVal = loggerHookKey("logrus.Hook")
// LoggerHook returns the appropriate logger hook for current context
// the hook affects job logger, not global logger
func LoggerHook(ctx context.Context) logrus.Hook {
val := ctx.Value(loggerHookKeyVal)
if val != nil {
if hook, ok := val.(logrus.Hook); ok {
return hook
}
}
return nil
}
// WithLoggerHook adds a value to the context for the logger hook
func WithLoggerHook(ctx context.Context, hook logrus.Hook) context.Context {
return context.WithValue(ctx, loggerHookKeyVal, hook)
}


@@ -30,6 +30,10 @@ type NewContainerInput struct {
NetworkAliases []string
ExposedPorts nat.PortSet
PortBindings nat.PortMap
// Gitea specific
AutoRemove bool
ValidVolumes []string
}
// FileEntry is a file to copy to a container
@@ -42,6 +46,7 @@ type FileEntry struct {
// Container for managing docker run containers
type Container interface {
Create(capAdd []string, capDrop []string) common.Executor
ConnectToNetwork(name string) common.Executor
Copy(destPath string, files ...*FileEntry) common.Executor
CopyTarStream(ctx context.Context, destPath string, tarStream io.Reader) error
CopyDir(destPath string, srcPath string, useGitIgnore bool) common.Executor


@@ -6,6 +6,7 @@ import (
"context"
"github.com/docker/docker/api/types"
"github.com/nektos/act/pkg/common"
)
@@ -22,7 +23,8 @@ func NewDockerNetworkCreateExecutor(name string) common.Executor {
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
common.Logger(ctx).Debugf("Network %v exists", name)
@@ -56,7 +58,8 @@ func NewDockerNetworkRemoveExecutor(name string) common.Executor {
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
result, err := cli.NetworkInspect(ctx, network.ID, types.NetworkInspectOptions{})


@@ -17,16 +17,19 @@ import (
"strings"
"github.com/Masterminds/semver"
"github.com/docker/cli/cli/compose/loader"
"github.com/docker/cli/cli/connhelper"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
networktypes "github.com/docker/docker/api/types/network"
"github.com/docker/docker/client"
"github.com/docker/docker/pkg/stdcopy"
"github.com/go-git/go-billy/v5/helper/polyfill"
"github.com/go-git/go-billy/v5/osfs"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/gobwas/glob"
"github.com/imdario/mergo"
"github.com/joho/godotenv"
"github.com/kballard/go-shellquote"
@@ -45,6 +48,25 @@ func NewContainer(input *NewContainerInput) ExecutionsEnvironment {
return cr
}
func (cr *containerReference) ConnectToNetwork(name string) common.Executor {
return common.
NewDebugExecutor("%sdocker network connect %s %s", logPrefix, name, cr.input.Name).
Then(
common.NewPipelineExecutor(
cr.connect(),
cr.connectToNetwork(name, cr.input.NetworkAliases),
).IfNot(common.Dryrun),
)
}
func (cr *containerReference) connectToNetwork(name string, aliases []string) common.Executor {
return func(ctx context.Context) error {
return cr.cli.NetworkConnect(ctx, name, cr.input.Name, &networktypes.EndpointSettings{
Aliases: aliases,
})
}
}
// supportsContainerImagePlatform returns true if the underlying Docker server
// API version is 1.41 and beyond
func supportsContainerImagePlatform(ctx context.Context, cli client.APIClient) bool {
@@ -346,12 +368,32 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
return nil, nil, fmt.Errorf("Cannot parse container options: '%s': '%w'", input.Options, err)
}
// FIXME: If everything is fine after gitea/act v0.260.0, remove the following comment.
// In the old fork version, the code was:
// if len(copts.netMode.Value()) == 0 {
// if err = copts.netMode.Set("host"); err != nil {
// return nil, nil, fmt.Errorf("Cannot parse networkmode=host. This is an internal error and should not happen: '%w'", err)
// }
// }
// And it had been commented out with this note:
// If a service container's network is set to `host`, the container will not be able to
// connect to the specified network created for the job container and the service containers.
// So comment out the following code.
// Not sure if it is still necessary to comment it out in the new version,
// since it uses cr.input.NetworkMode now.
if len(copts.netMode.Value()) == 0 {
if err = copts.netMode.Set(cr.input.NetworkMode); err != nil {
return nil, nil, fmt.Errorf("Cannot parse networkmode=%s. This is an internal error and should not happen: '%w'", cr.input.NetworkMode, err)
}
}
// If the `privileged` config has been disabled, `copts.privileged` needs to be forced to false,
// even if the user specifies `--privileged` in the options string.
if !hostConfig.Privileged {
copts.privileged = false
}
containerConfig, err := parse(flags, copts, runtime.GOOS)
if err != nil {
return nil, nil, fmt.Errorf("Cannot process container options: '%s': '%w'", input.Options, err)
@@ -359,7 +401,7 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
logger.Debugf("Custom container.Config from options ==> %+v", containerConfig.Config)
err = mergo.Merge(config, containerConfig.Config, mergo.WithOverride)
err = mergo.Merge(config, containerConfig.Config, mergo.WithOverride, mergo.WithAppendSlice)
if err != nil {
return nil, nil, fmt.Errorf("Cannot merge container.Config options: '%s': '%w'", input.Options, err)
}
@@ -371,12 +413,17 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
hostConfig.Mounts = append(hostConfig.Mounts, containerConfig.HostConfig.Mounts...)
binds := hostConfig.Binds
mounts := hostConfig.Mounts
networkMode := hostConfig.NetworkMode
err = mergo.Merge(hostConfig, containerConfig.HostConfig, mergo.WithOverride)
if err != nil {
return nil, nil, fmt.Errorf("Cannot merge container.HostConfig options: '%s': '%w'", input.Options, err)
}
hostConfig.Binds = binds
hostConfig.Mounts = mounts
if len(copts.netMode.Value()) > 0 {
logger.Warn("--network and --net in the options will be ignored.")
}
hostConfig.NetworkMode = networkMode
logger.Debugf("Merged container.HostConfig ==> %+v", hostConfig)
return config, hostConfig, nil
@@ -398,7 +445,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
ExposedPorts: input.ExposedPorts,
Tty: isTerminal,
}
logger.Debugf("Common container.Config ==> %+v", config)
// For Gitea, reduce log noise
// logger.Debugf("Common container.Config ==> %+v", config)
if len(input.Cmd) != 0 {
config.Cmd = input.Cmd
@@ -440,16 +488,22 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
Privileged: input.Privileged,
UsernsMode: container.UsernsMode(input.UsernsMode),
PortBindings: input.PortBindings,
AutoRemove: input.AutoRemove,
}
logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
// For Gitea, reduce log noise
// logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
config, hostConfig, err := cr.mergeContainerConfigs(ctx, config, hostConfig)
if err != nil {
return err
}
// For Gitea
config, hostConfig = cr.sanitizeConfig(ctx, config, hostConfig)
var networkingConfig *network.NetworkingConfig
logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
// For Gitea, reduce log noise
// logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
n := hostConfig.NetworkMode
// IsUserDefined and IsHost are broken on windows
if n.IsUserDefined() && n != "host" && len(input.NetworkAliases) > 0 {
@@ -876,3 +930,63 @@ func (cr *containerReference) wait() common.Executor {
return fmt.Errorf("exit with `FAILURE`: %v", statusCode)
}
}
// For Gitea
// sanitizeConfig removes the invalid configurations from `config` and `hostConfig`
func (cr *containerReference) sanitizeConfig(ctx context.Context, config *container.Config, hostConfig *container.HostConfig) (*container.Config, *container.HostConfig) {
logger := common.Logger(ctx)
if len(cr.input.ValidVolumes) > 0 {
globs := make([]glob.Glob, 0, len(cr.input.ValidVolumes))
for _, v := range cr.input.ValidVolumes {
if g, err := glob.Compile(v); err != nil {
logger.Errorf("create glob from %s error: %v", v, err)
} else {
globs = append(globs, g)
}
}
isValid := func(v string) bool {
for _, g := range globs {
if g.Match(v) {
return true
}
}
return false
}
// sanitize binds
sanitizedBinds := make([]string, 0, len(hostConfig.Binds))
for _, bind := range hostConfig.Binds {
parsed, err := loader.ParseVolume(bind)
if err != nil {
logger.Warnf("parse volume [%s] error: %v", bind, err)
continue
}
if parsed.Source == "" {
// anonymous volume
sanitizedBinds = append(sanitizedBinds, bind)
continue
}
if isValid(parsed.Source) {
sanitizedBinds = append(sanitizedBinds, bind)
} else {
logger.Warnf("[%s] is not a valid volume, will be ignored", parsed.Source)
}
}
hostConfig.Binds = sanitizedBinds
// sanitize mounts
sanitizedMounts := make([]mount.Mount, 0, len(hostConfig.Mounts))
for _, mt := range hostConfig.Mounts {
if isValid(mt.Source) {
sanitizedMounts = append(sanitizedMounts, mt)
} else {
logger.Warnf("[%s] is not a valid volume, will be ignored", mt.Source)
}
}
hostConfig.Mounts = sanitizedMounts
} else {
hostConfig.Binds = []string{}
hostConfig.Mounts = []mount.Mount{}
}
return config, hostConfig
}

View File

@@ -11,8 +11,12 @@ import (
"testing"
"time"
"github.com/nektos/act/pkg/common"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/client"
"github.com/sirupsen/logrus/hooks/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
)
@@ -246,3 +250,76 @@ func TestDockerCopyTarStreamErrorInMkdir(t *testing.T) {
// Type assert containerReference implements ExecutionsEnvironment
var _ ExecutionsEnvironment = &containerReference{}
func TestCheckVolumes(t *testing.T) {
testCases := []struct {
desc string
validVolumes []string
binds []string
expectedBinds []string
}{
{
desc: "match all volumes",
validVolumes: []string{"**"},
binds: []string{
"shared_volume:/shared_volume",
"/home/test/data:/test_data",
"/etc/conf.d/base.json:/config/base.json",
"sql_data:/sql_data",
"/secrets/keys:/keys",
},
expectedBinds: []string{
"shared_volume:/shared_volume",
"/home/test/data:/test_data",
"/etc/conf.d/base.json:/config/base.json",
"sql_data:/sql_data",
"/secrets/keys:/keys",
},
},
{
desc: "no volumes can be matched",
validVolumes: []string{},
binds: []string{
"shared_volume:/shared_volume",
"/home/test/data:/test_data",
"/etc/conf.d/base.json:/config/base.json",
"sql_data:/sql_data",
"/secrets/keys:/keys",
},
expectedBinds: []string{},
},
{
desc: "only allowed volumes can be matched",
validVolumes: []string{
"shared_volume",
"/home/test/data",
"/etc/conf.d/*.json",
},
binds: []string{
"shared_volume:/shared_volume",
"/home/test/data:/test_data",
"/etc/conf.d/base.json:/config/base.json",
"sql_data:/sql_data",
"/secrets/keys:/keys",
},
expectedBinds: []string{
"shared_volume:/shared_volume",
"/home/test/data:/test_data",
"/etc/conf.d/base.json:/config/base.json",
},
},
}
for _, tc := range testCases {
t.Run(tc.desc, func(t *testing.T) {
logger, _ := test.NewNullLogger()
ctx := common.WithLogger(context.Background(), logger)
cr := &containerReference{
input: &NewContainerInput{
ValidVolumes: tc.validVolumes,
},
}
_, hostConf := cr.sanitizeConfig(ctx, &container.Config{}, &container.HostConfig{Binds: tc.binds})
assert.Equal(t, tc.expectedBinds, hostConf.Binds)
})
}
}

View File

@@ -41,6 +41,12 @@ func (e *HostEnvironment) Create(_ []string, _ []string) common.Executor {
}
}
func (e *HostEnvironment) ConnectToNetwork(name string) common.Executor {
return func(ctx context.Context) error {
return nil
}
}
func (e *HostEnvironment) Close() common.Executor {
return func(ctx context.Context) error {
return nil

View File

@@ -155,6 +155,8 @@ func (impl *interperterImpl) evaluateVariable(variableNode *actionlint.VariableN
switch strings.ToLower(variableNode.Name) {
case "github":
return impl.env.Github, nil
case "gitea": // compatible with Gitea
return impl.env.Github, nil
case "env":
return impl.env.Env, nil
case "job":

pkg/jobparser/evaluator.go (185 lines) Normal file
View File

@@ -0,0 +1,185 @@
package jobparser
import (
"fmt"
"regexp"
"strings"
"github.com/nektos/act/pkg/exprparser"
"gopkg.in/yaml.v3"
)
// ExpressionEvaluator is copied from runner.expressionEvaluator,
// to avoid unnecessary dependencies
type ExpressionEvaluator struct {
interpreter exprparser.Interpreter
}
func NewExpressionEvaluator(interpreter exprparser.Interpreter) *ExpressionEvaluator {
return &ExpressionEvaluator{interpreter: interpreter}
}
func (ee ExpressionEvaluator) evaluate(in string, defaultStatusCheck exprparser.DefaultStatusCheck) (interface{}, error) {
evaluated, err := ee.interpreter.Evaluate(in, defaultStatusCheck)
return evaluated, err
}
func (ee ExpressionEvaluator) evaluateScalarYamlNode(node *yaml.Node) error {
var in string
if err := node.Decode(&in); err != nil {
return err
}
if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
return nil
}
expr, _ := rewriteSubExpression(in, false)
res, err := ee.evaluate(expr, exprparser.DefaultStatusCheckNone)
if err != nil {
return err
}
return node.Encode(res)
}
func (ee ExpressionEvaluator) evaluateMappingYamlNode(node *yaml.Node) error {
// GitHub has this undocumented feature to merge maps, called the insert directive
insertDirective := regexp.MustCompile(`\${{\s*insert\s*}}`)
for i := 0; i < len(node.Content)/2; {
k := node.Content[i*2]
v := node.Content[i*2+1]
if err := ee.EvaluateYamlNode(v); err != nil {
return err
}
var sk string
// Merge the nested map of the insert directive
if k.Decode(&sk) == nil && insertDirective.MatchString(sk) {
node.Content = append(append(node.Content[:i*2], v.Content...), node.Content[(i+1)*2:]...)
i += len(v.Content) / 2
} else {
if err := ee.EvaluateYamlNode(k); err != nil {
return err
}
i++
}
}
return nil
}
func (ee ExpressionEvaluator) evaluateSequenceYamlNode(node *yaml.Node) error {
for i := 0; i < len(node.Content); {
v := node.Content[i]
// Preserve nested sequences
wasseq := v.Kind == yaml.SequenceNode
if err := ee.EvaluateYamlNode(v); err != nil {
return err
}
// GitHub has this undocumented feature to merge sequences / arrays
// If evaluation produced a nested sequence, merge the arrays
if v.Kind == yaml.SequenceNode && !wasseq {
node.Content = append(append(node.Content[:i], v.Content...), node.Content[i+1:]...)
i += len(v.Content)
} else {
i++
}
}
return nil
}
func (ee ExpressionEvaluator) EvaluateYamlNode(node *yaml.Node) error {
switch node.Kind {
case yaml.ScalarNode:
return ee.evaluateScalarYamlNode(node)
case yaml.MappingNode:
return ee.evaluateMappingYamlNode(node)
case yaml.SequenceNode:
return ee.evaluateSequenceYamlNode(node)
default:
return nil
}
}
func (ee ExpressionEvaluator) Interpolate(in string) string {
if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
return in
}
expr, _ := rewriteSubExpression(in, true)
evaluated, err := ee.evaluate(expr, exprparser.DefaultStatusCheckNone)
if err != nil {
return ""
}
value, ok := evaluated.(string)
if !ok {
panic(fmt.Sprintf("Expression %s did not evaluate to a string", expr))
}
return value
}
func escapeFormatString(in string) string {
return strings.ReplaceAll(strings.ReplaceAll(in, "{", "{{"), "}", "}}")
}
func rewriteSubExpression(in string, forceFormat bool) (string, error) {
if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
return in, nil
}
strPattern := regexp.MustCompile("(?:''|[^'])*'")
pos := 0
exprStart := -1
strStart := -1
var results []string
formatOut := ""
for pos < len(in) {
if strStart > -1 {
matches := strPattern.FindStringIndex(in[pos:])
if matches == nil {
panic("unclosed string.")
}
strStart = -1
pos += matches[1]
} else if exprStart > -1 {
exprEnd := strings.Index(in[pos:], "}}")
strStart = strings.Index(in[pos:], "'")
if exprEnd > -1 && strStart > -1 {
if exprEnd < strStart {
strStart = -1
} else {
exprEnd = -1
}
}
if exprEnd > -1 {
formatOut += fmt.Sprintf("{%d}", len(results))
results = append(results, strings.TrimSpace(in[exprStart:pos+exprEnd]))
pos += exprEnd + 2
exprStart = -1
} else if strStart > -1 {
pos += strStart + 1
} else {
panic("unclosed expression.")
}
} else {
exprStart = strings.Index(in[pos:], "${{")
if exprStart != -1 {
formatOut += escapeFormatString(in[pos : pos+exprStart])
exprStart = pos + exprStart + 3
pos = exprStart
} else {
formatOut += escapeFormatString(in[pos:])
pos = len(in)
}
}
}
if len(results) == 1 && formatOut == "{0}" && !forceFormat {
return in, nil
}
out := fmt.Sprintf("format('%s', %s)", strings.ReplaceAll(formatOut, "'", "''"), strings.Join(results, ", "))
return out, nil
}
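
As a rough illustration of what rewriteSubExpression produces (it is unexported, so a sketch like this would have to live inside package jobparser, e.g. in a test file); the inputs and expected outputs below follow from the code above.

```go
package jobparser

import "fmt"

// sketchRewrite is an illustrative helper, not part of the actual code.
func sketchRewrite() {
	// A single bare expression is returned unchanged unless forceFormat is set.
	out, _ := rewriteSubExpression("${{ github.ref }}", false)
	fmt.Println(out) // ${{ github.ref }}

	// Mixed literal text and expressions are rewritten into a format() call.
	out, _ = rewriteSubExpression("hello ${{ github.actor }}!", false)
	fmt.Println(out) // format('hello {0}!', github.actor)
}
```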

View File

@@ -0,0 +1,84 @@
package jobparser
import (
"github.com/nektos/act/pkg/exprparser"
"github.com/nektos/act/pkg/model"
"gopkg.in/yaml.v3"
)
// NewInterpeter returns an interpreter used on the server side;
// it needs only the github, needs, strategy, matrix, and inputs contexts,
// see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability
func NewInterpeter(
jobID string,
job *model.Job,
matrix map[string]interface{},
gitCtx *model.GithubContext,
results map[string]*JobResult,
vars map[string]string,
inputs map[string]interface{},
) exprparser.Interpreter {
strategy := make(map[string]interface{})
if job.Strategy != nil {
strategy["fail-fast"] = job.Strategy.FailFast
strategy["max-parallel"] = job.Strategy.MaxParallel
}
run := &model.Run{
Workflow: &model.Workflow{
Jobs: map[string]*model.Job{},
},
JobID: jobID,
}
for id, result := range results {
need := yaml.Node{}
_ = need.Encode(result.Needs)
run.Workflow.Jobs[id] = &model.Job{
RawNeeds: need,
Result: result.Result,
Outputs: result.Outputs,
}
}
jobs := run.Workflow.Jobs
jobNeeds := run.Job().Needs()
using := map[string]exprparser.Needs{}
for _, need := range jobNeeds {
if v, ok := jobs[need]; ok {
using[need] = exprparser.Needs{
Outputs: v.Outputs,
Result: v.Result,
}
}
}
ee := &exprparser.EvaluationEnvironment{
Github: gitCtx,
Env: nil, // no need
Job: nil, // no need
Steps: nil, // no need
Runner: nil, // no need
Secrets: nil, // no need
Strategy: strategy,
Matrix: matrix,
Needs: using,
Inputs: inputs,
Vars: vars,
}
config := exprparser.Config{
Run: run,
WorkingDir: "", // WorkingDir is used for the function hashFiles, but it's not needed in the server
Context: "job",
}
return exprparser.NewInterpeter(ee, config)
}
// JobResult is the minimum job result information required by the interpreter
type JobResult struct {
Needs []string
Result string
Outputs map[string]string
}

pkg/jobparser/jobparser.go (168 lines) Normal file
View File

@@ -0,0 +1,168 @@
package jobparser
import (
"bytes"
"fmt"
"sort"
"strings"
"gopkg.in/yaml.v3"
"github.com/nektos/act/pkg/exprparser"
"github.com/nektos/act/pkg/model"
)
func Parse(content []byte, options ...ParseOption) ([]*SingleWorkflow, error) {
origin, err := model.ReadWorkflow(bytes.NewReader(content))
if err != nil {
return nil, fmt.Errorf("model.ReadWorkflow: %w", err)
}
workflow := &SingleWorkflow{}
if err := yaml.Unmarshal(content, workflow); err != nil {
return nil, fmt.Errorf("yaml.Unmarshal: %w", err)
}
pc := &parseContext{}
for _, o := range options {
o(pc)
}
results := map[string]*JobResult{}
for id, job := range origin.Jobs {
results[id] = &JobResult{
Needs: job.Needs(),
Result: pc.jobResults[id],
Outputs: nil, // not supported yet
}
}
var ret []*SingleWorkflow
ids, jobs, err := workflow.jobs()
if err != nil {
return nil, fmt.Errorf("invalid jobs: %w", err)
}
evaluator := NewExpressionEvaluator(exprparser.NewInterpeter(&exprparser.EvaluationEnvironment{Github: pc.gitContext, Vars: pc.vars}, exprparser.Config{}))
workflow.RunName = evaluator.Interpolate(workflow.RunName)
for i, id := range ids {
job := jobs[i]
matrixes, err := getMatrixes(origin.GetJob(id))
if err != nil {
return nil, fmt.Errorf("getMatrixes: %w", err)
}
for _, matrix := range matrixes {
job := job.Clone()
if job.Name == "" {
job.Name = id
}
job.Strategy.RawMatrix = encodeMatrix(matrix)
evaluator := NewExpressionEvaluator(NewInterpeter(id, origin.GetJob(id), matrix, pc.gitContext, results, pc.vars, nil))
job.Name = nameWithMatrix(job.Name, matrix, evaluator)
runsOn := origin.GetJob(id).RunsOn()
for i, v := range runsOn {
runsOn[i] = evaluator.Interpolate(v)
}
job.RawRunsOn = encodeRunsOn(runsOn)
swf := &SingleWorkflow{
Name: workflow.Name,
RawOn: workflow.RawOn,
Env: workflow.Env,
Defaults: workflow.Defaults,
RawPermissions: workflow.RawPermissions,
RunName: workflow.RunName,
}
if err := swf.SetJob(id, job); err != nil {
return nil, fmt.Errorf("SetJob: %w", err)
}
ret = append(ret, swf)
}
}
return ret, nil
}
func WithJobResults(results map[string]string) ParseOption {
return func(c *parseContext) {
c.jobResults = results
}
}
func WithGitContext(context *model.GithubContext) ParseOption {
return func(c *parseContext) {
c.gitContext = context
}
}
func WithVars(vars map[string]string) ParseOption {
return func(c *parseContext) {
c.vars = vars
}
}
type parseContext struct {
jobResults map[string]string
gitContext *model.GithubContext
vars map[string]string
}
type ParseOption func(c *parseContext)
func getMatrixes(job *model.Job) ([]map[string]interface{}, error) {
ret, err := job.GetMatrixes()
if err != nil {
return nil, fmt.Errorf("GetMatrixes: %w", err)
}
sort.Slice(ret, func(i, j int) bool {
return matrixName(ret[i]) < matrixName(ret[j])
})
return ret, nil
}
func encodeMatrix(matrix map[string]interface{}) yaml.Node {
if len(matrix) == 0 {
return yaml.Node{}
}
value := map[string][]interface{}{}
for k, v := range matrix {
value[k] = []interface{}{v}
}
node := yaml.Node{}
_ = node.Encode(value)
return node
}
func encodeRunsOn(runsOn []string) yaml.Node {
node := yaml.Node{}
if len(runsOn) == 1 {
_ = node.Encode(runsOn[0])
} else {
_ = node.Encode(runsOn)
}
return node
}
func nameWithMatrix(name string, m map[string]interface{}, evaluator *ExpressionEvaluator) string {
if len(m) == 0 {
return name
}
if !strings.Contains(name, "${{") || !strings.Contains(name, "}}") {
return name + " " + matrixName(m)
}
return evaluator.Interpolate(name)
}
func matrixName(m map[string]interface{}) string {
ks := make([]string, 0, len(m))
for k := range m {
ks = append(ks, k)
}
sort.Strings(ks)
vs := make([]string, 0, len(m))
for _, v := range ks {
vs = append(vs, fmt.Sprint(m[v]))
}
return fmt.Sprintf("(%s)", strings.Join(vs, ", "))
}
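
A minimal usage sketch of Parse: it returns one SingleWorkflow per job and matrix combination, each of which can be marshalled as a standalone workflow. The embedded workflow content is invented for illustration.

```go
package main

import (
	"fmt"

	"github.com/nektos/act/pkg/jobparser"
)

func main() {
	content := []byte(`
name: demo
on: push
jobs:
  test:
    strategy:
      matrix:
        version: [1.18, 1.19]
    runs-on: linux
    steps:
      - run: go version
`)

	workflows, err := jobparser.Parse(content)
	if err != nil {
		panic(err)
	}
	for _, wf := range workflows {
		id, job := wf.Job()
		// With the matrix above this prints: test -> "test (1.18)" and "test (1.19)".
		fmt.Printf("%s -> %q\n", id, job.Name)

		out, _ := wf.Marshal() // a standalone single-job workflow document
		_ = out
	}
}
```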

View File

@@ -0,0 +1,81 @@
package jobparser
import (
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
func TestParse(t *testing.T) {
tests := []struct {
name string
options []ParseOption
wantErr bool
}{
{
name: "multiple_jobs",
options: nil,
wantErr: false,
},
{
name: "multiple_matrix",
options: nil,
wantErr: false,
},
{
name: "has_needs",
options: nil,
wantErr: false,
},
{
name: "has_with",
options: nil,
wantErr: false,
},
{
name: "has_secrets",
options: nil,
wantErr: false,
},
{
name: "empty_step",
options: nil,
wantErr: false,
},
{
name: "job_name_with_matrix",
options: nil,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
content := ReadTestdata(t, tt.name+".in.yaml")
want := ReadTestdata(t, tt.name+".out.yaml")
got, err := Parse(content, tt.options...)
if tt.wantErr {
require.Error(t, err)
}
require.NoError(t, err)
builder := &strings.Builder{}
for _, v := range got {
if builder.Len() > 0 {
builder.WriteString("---\n")
}
encoder := yaml.NewEncoder(builder)
encoder.SetIndent(2)
require.NoError(t, encoder.Encode(v))
id, job := v.Job()
assert.NotEmpty(t, id)
assert.NotNil(t, job)
}
assert.Equal(t, string(want), builder.String())
})
}
}

pkg/jobparser/model.go (514 lines) Normal file
View File

@@ -0,0 +1,514 @@
package jobparser
import (
"bytes"
"fmt"
"github.com/nektos/act/pkg/model"
"gopkg.in/yaml.v3"
)
// SingleWorkflow is a workflow with a single job and a single matrix
type SingleWorkflow struct {
Name string `yaml:"name,omitempty"`
RawOn yaml.Node `yaml:"on,omitempty"`
Env map[string]string `yaml:"env,omitempty"`
RawJobs yaml.Node `yaml:"jobs,omitempty"`
Defaults Defaults `yaml:"defaults,omitempty"`
RawPermissions yaml.Node `yaml:"permissions,omitempty"`
RunName string `yaml:"run-name,omitempty"`
}
func (w *SingleWorkflow) Job() (string, *Job) {
ids, jobs, _ := w.jobs()
if len(ids) >= 1 {
return ids[0], jobs[0]
}
return "", nil
}
func (w *SingleWorkflow) jobs() ([]string, []*Job, error) {
ids, jobs, err := parseMappingNode[*Job](&w.RawJobs)
if err != nil {
return nil, nil, err
}
for _, job := range jobs {
steps := make([]*Step, 0, len(job.Steps))
for _, s := range job.Steps {
if s != nil {
steps = append(steps, s)
}
}
job.Steps = steps
}
return ids, jobs, nil
}
func (w *SingleWorkflow) SetJob(id string, job *Job) error {
m := map[string]*Job{
id: job,
}
out, err := yaml.Marshal(m)
if err != nil {
return err
}
node := yaml.Node{}
if err := yaml.Unmarshal(out, &node); err != nil {
return err
}
if len(node.Content) != 1 || node.Content[0].Kind != yaml.MappingNode {
return fmt.Errorf("can not set job: %q", out)
}
w.RawJobs = *node.Content[0]
return nil
}
func (w *SingleWorkflow) Marshal() ([]byte, error) {
return yaml.Marshal(w)
}
type Job struct {
Name string `yaml:"name,omitempty"`
RawNeeds yaml.Node `yaml:"needs,omitempty"`
RawRunsOn yaml.Node `yaml:"runs-on,omitempty"`
Env yaml.Node `yaml:"env,omitempty"`
If yaml.Node `yaml:"if,omitempty"`
Steps []*Step `yaml:"steps,omitempty"`
TimeoutMinutes string `yaml:"timeout-minutes,omitempty"`
Services map[string]*ContainerSpec `yaml:"services,omitempty"`
Strategy Strategy `yaml:"strategy,omitempty"`
RawContainer yaml.Node `yaml:"container,omitempty"`
Defaults Defaults `yaml:"defaults,omitempty"`
Outputs map[string]string `yaml:"outputs,omitempty"`
Uses string `yaml:"uses,omitempty"`
With map[string]interface{} `yaml:"with,omitempty"`
RawSecrets yaml.Node `yaml:"secrets,omitempty"`
RawConcurrency *model.RawConcurrency `yaml:"concurrency,omitempty"`
RawPermissions yaml.Node `yaml:"permissions,omitempty"`
}
func (j *Job) Clone() *Job {
if j == nil {
return nil
}
return &Job{
Name: j.Name,
RawNeeds: j.RawNeeds,
RawRunsOn: j.RawRunsOn,
Env: j.Env,
If: j.If,
Steps: j.Steps,
TimeoutMinutes: j.TimeoutMinutes,
Services: j.Services,
Strategy: j.Strategy,
RawContainer: j.RawContainer,
Defaults: j.Defaults,
Outputs: j.Outputs,
Uses: j.Uses,
With: j.With,
RawSecrets: j.RawSecrets,
RawConcurrency: j.RawConcurrency,
RawPermissions: j.RawPermissions,
}
}
func (j *Job) Needs() []string {
return (&model.Job{RawNeeds: j.RawNeeds}).Needs()
}
func (j *Job) EraseNeeds() *Job {
j.RawNeeds = yaml.Node{}
return j
}
func (j *Job) RunsOn() []string {
return (&model.Job{RawRunsOn: j.RawRunsOn}).RunsOn()
}
type Step struct {
ID string `yaml:"id,omitempty"`
If yaml.Node `yaml:"if,omitempty"`
Name string `yaml:"name,omitempty"`
Uses string `yaml:"uses,omitempty"`
Run string `yaml:"run,omitempty"`
WorkingDirectory string `yaml:"working-directory,omitempty"`
Shell string `yaml:"shell,omitempty"`
Env yaml.Node `yaml:"env,omitempty"`
With map[string]string `yaml:"with,omitempty"`
ContinueOnError bool `yaml:"continue-on-error,omitempty"`
TimeoutMinutes string `yaml:"timeout-minutes,omitempty"`
}
// String gets the name of the step
func (s *Step) String() string {
if s == nil {
return ""
}
return (&model.Step{
ID: s.ID,
Name: s.Name,
Uses: s.Uses,
Run: s.Run,
}).String()
}
type ContainerSpec struct {
Image string `yaml:"image,omitempty"`
Env map[string]string `yaml:"env,omitempty"`
Ports []string `yaml:"ports,omitempty"`
Volumes []string `yaml:"volumes,omitempty"`
Options string `yaml:"options,omitempty"`
Credentials map[string]string `yaml:"credentials,omitempty"`
Cmd []string `yaml:"cmd,omitempty"`
}
type Strategy struct {
FailFastString string `yaml:"fail-fast,omitempty"`
MaxParallelString string `yaml:"max-parallel,omitempty"`
RawMatrix yaml.Node `yaml:"matrix,omitempty"`
}
type Defaults struct {
Run RunDefaults `yaml:"run,omitempty"`
}
type RunDefaults struct {
Shell string `yaml:"shell,omitempty"`
WorkingDirectory string `yaml:"working-directory,omitempty"`
}
type WorkflowDispatchInput struct {
Name string `yaml:"name"`
Description string `yaml:"description"`
Required bool `yaml:"required"`
Default string `yaml:"default"`
Type string `yaml:"type"`
Options []string `yaml:"options"`
}
type Event struct {
Name string
acts map[string][]string
schedules []map[string]string
inputs []WorkflowDispatchInput
}
func (evt *Event) IsSchedule() bool {
return evt.schedules != nil
}
func (evt *Event) Acts() map[string][]string {
return evt.acts
}
func (evt *Event) Schedules() []map[string]string {
return evt.schedules
}
func (evt *Event) Inputs() []WorkflowDispatchInput {
return evt.inputs
}
func parseWorkflowDispatchInputs(inputs map[string]interface{}) ([]WorkflowDispatchInput, error) {
var results []WorkflowDispatchInput
for name, input := range inputs {
inputMap, ok := input.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("invalid input: %v", input)
}
input := WorkflowDispatchInput{
Name: name,
}
if desc, ok := inputMap["description"].(string); ok {
input.Description = desc
}
if required, ok := inputMap["required"].(bool); ok {
input.Required = required
}
if defaultVal, ok := inputMap["default"].(string); ok {
input.Default = defaultVal
}
if inputType, ok := inputMap["type"].(string); ok {
input.Type = inputType
}
if options, ok := inputMap["options"].([]string); ok {
input.Options = options
} else if options, ok := inputMap["options"].([]interface{}); ok {
for _, option := range options {
if opt, ok := option.(string); ok {
input.Options = append(input.Options, opt)
}
}
}
results = append(results, input)
}
return results, nil
}
func ReadWorkflowRawConcurrency(content []byte) (*model.RawConcurrency, error) {
w := new(model.Workflow)
err := yaml.NewDecoder(bytes.NewReader(content)).Decode(w)
return w.RawConcurrency, err
}
func EvaluateConcurrency(rc *model.RawConcurrency, jobID string, job *Job, gitCtx map[string]any, results map[string]*JobResult, vars map[string]string, inputs map[string]any) (string, bool, error) {
actJob := &model.Job{}
if job != nil {
actJob.Strategy = &model.Strategy{
FailFastString: job.Strategy.FailFastString,
MaxParallelString: job.Strategy.MaxParallelString,
RawMatrix: job.Strategy.RawMatrix,
}
actJob.Strategy.FailFast = actJob.Strategy.GetFailFast()
actJob.Strategy.MaxParallel = actJob.Strategy.GetMaxParallel()
}
matrix := make(map[string]any)
matrixes, err := actJob.GetMatrixes()
if err != nil {
return "", false, err
}
if len(matrixes) > 0 {
matrix = matrixes[0]
}
evaluator := NewExpressionEvaluator(NewInterpeter(jobID, actJob, matrix, toGitContext(gitCtx), results, vars, inputs))
group := evaluator.Interpolate(rc.Group)
cancelInProgress := evaluator.Interpolate(rc.CancelInProgress)
return group, cancelInProgress == "true", nil
}
func toGitContext(input map[string]any) *model.GithubContext {
gitContext := &model.GithubContext{
EventPath: asString(input["event_path"]),
Workflow: asString(input["workflow"]),
RunID: asString(input["run_id"]),
RunNumber: asString(input["run_number"]),
Actor: asString(input["actor"]),
Repository: asString(input["repository"]),
EventName: asString(input["event_name"]),
Sha: asString(input["sha"]),
Ref: asString(input["ref"]),
RefName: asString(input["ref_name"]),
RefType: asString(input["ref_type"]),
HeadRef: asString(input["head_ref"]),
BaseRef: asString(input["base_ref"]),
Token: asString(input["token"]),
Workspace: asString(input["workspace"]),
Action: asString(input["action"]),
ActionPath: asString(input["action_path"]),
ActionRef: asString(input["action_ref"]),
ActionRepository: asString(input["action_repository"]),
Job: asString(input["job"]),
RepositoryOwner: asString(input["repository_owner"]),
RetentionDays: asString(input["retention_days"]),
}
event, ok := input["event"].(map[string]any)
if ok {
gitContext.Event = event
}
return gitContext
}
func ParseRawOn(rawOn *yaml.Node) ([]*Event, error) {
switch rawOn.Kind {
case yaml.ScalarNode:
var val string
err := rawOn.Decode(&val)
if err != nil {
return nil, err
}
return []*Event{
{Name: val},
}, nil
case yaml.SequenceNode:
var val []interface{}
err := rawOn.Decode(&val)
if err != nil {
return nil, err
}
res := make([]*Event, 0, len(val))
for _, v := range val {
switch t := v.(type) {
case string:
res = append(res, &Event{Name: t})
default:
return nil, fmt.Errorf("invalid type %T", t)
}
}
return res, nil
case yaml.MappingNode:
events, triggers, err := parseMappingNode[yaml.Node](rawOn)
if err != nil {
return nil, err
}
res := make([]*Event, 0, len(events))
for i, k := range events {
v := triggers[i]
switch v.Kind {
case yaml.ScalarNode:
res = append(res, &Event{
Name: k,
})
case yaml.SequenceNode:
var t []interface{}
err := v.Decode(&t)
if err != nil {
return nil, err
}
schedules := make([]map[string]string, len(t))
if k == "schedule" {
for i, tt := range t {
vv, ok := tt.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("unknown on type(schedule): %#v", v)
}
schedules[i] = make(map[string]string, len(vv))
for k, vvv := range vv {
var ok bool
if schedules[i][k], ok = vvv.(string); !ok {
return nil, fmt.Errorf("unknown on type(schedule): %#v", v)
}
}
}
}
if len(schedules) == 0 {
schedules = nil
}
res = append(res, &Event{
Name: k,
schedules: schedules,
})
case yaml.MappingNode:
acts := make(map[string][]string, len(v.Content)/2)
var inputs []WorkflowDispatchInput
expectedKey := true
var act string
for _, content := range v.Content {
if expectedKey {
if content.Kind != yaml.ScalarNode {
return nil, fmt.Errorf("key type not string: %#v", content)
}
act = ""
err := content.Decode(&act)
if err != nil {
return nil, err
}
} else {
switch content.Kind {
case yaml.SequenceNode:
var t []string
err := content.Decode(&t)
if err != nil {
return nil, err
}
acts[act] = t
case yaml.ScalarNode:
var t string
err := content.Decode(&t)
if err != nil {
return nil, err
}
acts[act] = []string{t}
case yaml.MappingNode:
if k != "workflow_dispatch" || act != "inputs" {
return nil, fmt.Errorf("map should only for workflow_dispatch but %s: %#v", act, content)
}
var key string
for i, vv := range content.Content {
if i%2 == 0 {
if vv.Kind != yaml.ScalarNode {
return nil, fmt.Errorf("key type not string: %#v", vv)
}
key = ""
if err := vv.Decode(&key); err != nil {
return nil, err
}
} else {
if vv.Kind != yaml.MappingNode {
return nil, fmt.Errorf("key type not map(%s): %#v", key, vv)
}
input := WorkflowDispatchInput{}
if err := vv.Decode(&input); err != nil {
return nil, err
}
input.Name = key
inputs = append(inputs, input)
}
}
default:
return nil, fmt.Errorf("unknown on type: %#v", content)
}
}
expectedKey = !expectedKey
}
if len(inputs) == 0 {
inputs = nil
}
if len(acts) == 0 {
acts = nil
}
res = append(res, &Event{
Name: k,
acts: acts,
inputs: inputs,
})
default:
return nil, fmt.Errorf("unknown on type: %v", v.Kind)
}
}
return res, nil
default:
return nil, fmt.Errorf("unknown on type: %v", rawOn.Kind)
}
}
// parseMappingNode parses a mapping node and preserves order.
func parseMappingNode[T any](node *yaml.Node) ([]string, []T, error) {
if node.Kind != yaml.MappingNode {
return nil, nil, fmt.Errorf("input node is not a mapping node")
}
var scalars []string
var datas []T
expectKey := true
for _, item := range node.Content {
if expectKey {
if item.Kind != yaml.ScalarNode {
return nil, nil, fmt.Errorf("not a valid scalar node: %v", item.Value)
}
scalars = append(scalars, item.Value)
expectKey = false
} else {
var val T
if err := item.Decode(&val); err != nil {
return nil, nil, err
}
datas = append(datas, val)
expectKey = true
}
}
if len(scalars) != len(datas) {
return nil, nil, fmt.Errorf("invalid definition of on: %v", node.Value)
}
return scalars, datas, nil
}
func asString(v interface{}) string {
if v == nil {
return ""
} else if s, ok := v.(string); ok {
return s
}
return ""
}
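
To close the loop on the concurrency support added in this file, here is a minimal sketch of reading the raw, uninterpolated concurrency block with ReadWorkflowRawConcurrency; EvaluateConcurrency above can then interpolate the group against a concrete github context. The workflow content is invented for illustration.

```go
package main

import (
	"fmt"

	"github.com/nektos/act/pkg/jobparser"
)

func main() {
	content := []byte(`
name: demo
on: push
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
jobs:
  test:
    runs-on: linux
    steps:
      - run: echo hi
`)

	rc, err := jobparser.ReadWorkflowRawConcurrency(content)
	if err != nil {
		panic(err)
	}
	// Both fields are still raw expressions at this point.
	fmt.Println(rc.Group)            // ci-${{ github.ref }}
	fmt.Println(rc.CancelInProgress) // ${{ github.ref != 'refs/heads/main' }}
}
```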

pkg/jobparser/model_test.go (372 lines) Normal file
View File

@@ -0,0 +1,372 @@
package jobparser
import (
"fmt"
"strings"
"testing"
"github.com/nektos/act/pkg/model"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
func TestParseRawOn(t *testing.T) {
kases := []struct {
input string
result []*Event
}{
{
input: "on: issue_comment",
result: []*Event{
{
Name: "issue_comment",
},
},
},
{
input: "on:\n push",
result: []*Event{
{
Name: "push",
},
},
},
{
input: "on:\n - push\n - pull_request",
result: []*Event{
{
Name: "push",
},
{
Name: "pull_request",
},
},
},
{
input: "on:\n push:\n branches:\n - master",
result: []*Event{
{
Name: "push",
acts: map[string][]string{
"branches": {
"master",
},
},
},
},
},
{
input: "on:\n push:\n branches: main",
result: []*Event{
{
Name: "push",
acts: map[string][]string{
"branches": {
"main",
},
},
},
},
},
{
input: "on:\n branch_protection_rule:\n types: [created, deleted]",
result: []*Event{
{
Name: "branch_protection_rule",
acts: map[string][]string{
"types": {
"created",
"deleted",
},
},
},
},
},
{
input: "on:\n project:\n types: [created, deleted]\n milestone:\n types: [opened, deleted]",
result: []*Event{
{
Name: "project",
acts: map[string][]string{
"types": {
"created",
"deleted",
},
},
},
{
Name: "milestone",
acts: map[string][]string{
"types": {
"opened",
"deleted",
},
},
},
},
},
{
input: "on:\n pull_request:\n types:\n - opened\n branches:\n - 'releases/**'",
result: []*Event{
{
Name: "pull_request",
acts: map[string][]string{
"types": {
"opened",
},
"branches": {
"releases/**",
},
},
},
},
},
{
input: "on:\n push:\n branches:\n - main\n pull_request:\n types:\n - opened\n branches:\n - '**'",
result: []*Event{
{
Name: "push",
acts: map[string][]string{
"branches": {
"main",
},
},
},
{
Name: "pull_request",
acts: map[string][]string{
"types": {
"opened",
},
"branches": {
"**",
},
},
},
},
},
{
input: "on:\n push:\n branches:\n - 'main'\n - 'releases/**'",
result: []*Event{
{
Name: "push",
acts: map[string][]string{
"branches": {
"main",
"releases/**",
},
},
},
},
},
{
input: "on:\n push:\n tags:\n - v1.**",
result: []*Event{
{
Name: "push",
acts: map[string][]string{
"tags": {
"v1.**",
},
},
},
},
},
{
input: "on: [pull_request, workflow_dispatch]",
result: []*Event{
{
Name: "pull_request",
},
{
Name: "workflow_dispatch",
},
},
},
{
input: "on:\n schedule:\n - cron: '20 6 * * *'",
result: []*Event{
{
Name: "schedule",
schedules: []map[string]string{
{
"cron": "20 6 * * *",
},
},
},
},
},
{
input: `on:
workflow_dispatch:
inputs:
logLevel:
description: 'Log level'
required: true
default: 'warning'
type: choice
options:
- info
- warning
- debug
tags:
description: 'Test scenario tags'
required: false
type: boolean
environment:
description: 'Environment to run tests against'
type: environment
required: true
push:
`,
result: []*Event{
{
Name: "workflow_dispatch",
inputs: []WorkflowDispatchInput{
{
Name: "logLevel",
Description: "Log level",
Required: true,
Default: "warning",
Type: "choice",
Options: []string{"info", "warning", "debug"},
},
{
Name: "tags",
Description: "Test scenario tags",
Required: false,
Type: "boolean",
},
{
Name: "environment",
Description: "Environment to run tests against",
Type: "environment",
Required: true,
},
},
},
{
Name: "push",
},
},
},
}
for _, kase := range kases {
t.Run(kase.input, func(t *testing.T) {
origin, err := model.ReadWorkflow(strings.NewReader(kase.input))
assert.NoError(t, err)
events, err := ParseRawOn(&origin.RawOn)
assert.NoError(t, err)
assert.EqualValues(t, kase.result, events, fmt.Sprintf("%#v", events))
})
}
}
func TestSingleWorkflow_SetJob(t *testing.T) {
t.Run("erase needs", func(t *testing.T) {
content := ReadTestdata(t, "erase_needs.in.yaml")
want := ReadTestdata(t, "erase_needs.out.yaml")
swf, err := Parse(content)
require.NoError(t, err)
builder := &strings.Builder{}
for _, v := range swf {
id, job := v.Job()
require.NoError(t, v.SetJob(id, job.EraseNeeds()))
if builder.Len() > 0 {
builder.WriteString("---\n")
}
encoder := yaml.NewEncoder(builder)
encoder.SetIndent(2)
require.NoError(t, encoder.Encode(v))
}
assert.Equal(t, string(want), builder.String())
})
}
func TestParseMappingNode(t *testing.T) {
tests := []struct {
input string
scalars []string
datas []interface{}
}{
{
input: "on:\n push:\n branches:\n - master",
scalars: []string{"push"},
datas: []interface{}{
map[string]interface{}{
"branches": []interface{}{"master"},
},
},
},
{
input: "on:\n branch_protection_rule:\n types: [created, deleted]",
scalars: []string{"branch_protection_rule"},
datas: []interface{}{
map[string]interface{}{
"types": []interface{}{"created", "deleted"},
},
},
},
{
input: "on:\n project:\n types: [created, deleted]\n milestone:\n types: [opened, deleted]",
scalars: []string{"project", "milestone"},
datas: []interface{}{
map[string]interface{}{
"types": []interface{}{"created", "deleted"},
},
map[string]interface{}{
"types": []interface{}{"opened", "deleted"},
},
},
},
{
input: "on:\n pull_request:\n types:\n - opened\n branches:\n - 'releases/**'",
scalars: []string{"pull_request"},
datas: []interface{}{
map[string]interface{}{
"types": []interface{}{"opened"},
"branches": []interface{}{"releases/**"},
},
},
},
{
input: "on:\n push:\n branches:\n - main\n pull_request:\n types:\n - opened\n branches:\n - '**'",
scalars: []string{"push", "pull_request"},
datas: []interface{}{
map[string]interface{}{
"branches": []interface{}{"main"},
},
map[string]interface{}{
"types": []interface{}{"opened"},
"branches": []interface{}{"**"},
},
},
},
{
input: "on:\n schedule:\n - cron: '20 6 * * *'",
scalars: []string{"schedule"},
datas: []interface{}{
[]interface{}{map[string]interface{}{
"cron": "20 6 * * *",
}},
},
},
}
for _, test := range tests {
t.Run(test.input, func(t *testing.T) {
workflow, err := model.ReadWorkflow(strings.NewReader(test.input))
assert.NoError(t, err)
scalars, datas, err := parseMappingNode[interface{}](&workflow.RawOn)
assert.NoError(t, err)
assert.EqualValues(t, test.scalars, scalars, fmt.Sprintf("%#v", scalars))
assert.EqualValues(t, test.datas, datas, fmt.Sprintf("%#v", datas))
})
}
}

View File

@@ -0,0 +1,8 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: echo job-1
-

View File

@@ -0,0 +1,7 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: echo job-1

View File

@@ -0,0 +1,16 @@
name: test
jobs:
job1:
runs-on: linux
steps:
- run: uname -a
job2:
runs-on: linux
steps:
- run: uname -a
needs: job1
job3:
runs-on: linux
steps:
- run: uname -a
needs: [job1, job2]

View File

@@ -0,0 +1,23 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: uname -a
---
name: test
jobs:
job2:
name: job2
runs-on: linux
steps:
- run: uname -a
---
name: test
jobs:
job3:
name: job3
runs-on: linux
steps:
- run: uname -a

View File

@@ -0,0 +1,16 @@
name: test
jobs:
job1:
runs-on: linux
steps:
- run: uname -a
job2:
runs-on: linux
steps:
- run: uname -a
needs: job1
job3:
runs-on: linux
steps:
- run: uname -a
needs: [job1, job2]

View File

@@ -0,0 +1,25 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: uname -a
---
name: test
jobs:
job2:
name: job2
needs: job1
runs-on: linux
steps:
- run: uname -a
---
name: test
jobs:
job3:
name: job3
needs: [job1, job2]
runs-on: linux
steps:
- run: uname -a

View File

@@ -0,0 +1,14 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
uses: .gitea/workflows/build.yml
secrets:
secret: hideme
job2:
name: job2
runs-on: linux
uses: .gitea/workflows/build.yml
secrets: inherit

View File

@@ -0,0 +1,16 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
uses: .gitea/workflows/build.yml
secrets:
secret: hideme
---
name: test
jobs:
job2:
name: job2
runs-on: linux
uses: .gitea/workflows/build.yml
secrets: inherit

pkg/jobparser/testdata/has_with.in.yaml (15 lines) vendored Normal file
View File

@@ -0,0 +1,15 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
uses: .gitea/workflows/build.yml
with:
package: service
job2:
name: job2
runs-on: linux
uses: .gitea/workflows/build.yml
with:
package: module

View File

@@ -0,0 +1,17 @@
name: test
jobs:
job1:
name: job1
runs-on: linux
uses: .gitea/workflows/build.yml
with:
package: service
---
name: test
jobs:
job2:
name: job2
runs-on: linux
uses: .gitea/workflows/build.yml
with:
package: module

View File

@@ -0,0 +1,14 @@
name: test
jobs:
job1:
strategy:
matrix:
os: [ubuntu-22.04, ubuntu-20.04]
version: [1.17, 1.18, 1.19]
runs-on: ${{ matrix.os }}
name: test_version_${{ matrix.version }}_on_${{ matrix.os }}
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version

View File

@@ -0,0 +1,101 @@
name: test
jobs:
job1:
name: test_version_1.17_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.17
---
name: test
jobs:
job1:
name: test_version_1.18_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.18
---
name: test
jobs:
job1:
name: test_version_1.19_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.19
---
name: test
jobs:
job1:
name: test_version_1.17_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.17
---
name: test
jobs:
job1:
name: test_version_1.18_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.18
---
name: test
jobs:
job1:
name: test_version_1.19_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.19

View File

@@ -0,0 +1,22 @@
name: test
jobs:
zzz:
runs-on: linux
steps:
- run: echo zzz
job1:
runs-on: linux
steps:
- run: uname -a && go version
job2:
runs-on: linux
steps:
- run: uname -a && go version
job3:
runs-on: linux
steps:
- run: uname -a && go version
aaa:
runs-on: linux
steps:
- run: uname -a && go version

View File

@@ -0,0 +1,39 @@
name: test
jobs:
zzz:
name: zzz
runs-on: linux
steps:
- run: echo zzz
---
name: test
jobs:
job1:
name: job1
runs-on: linux
steps:
- run: uname -a && go version
---
name: test
jobs:
job2:
name: job2
runs-on: linux
steps:
- run: uname -a && go version
---
name: test
jobs:
job3:
name: job3
runs-on: linux
steps:
- run: uname -a && go version
---
name: test
jobs:
aaa:
name: aaa
runs-on: linux
steps:
- run: uname -a && go version

View File

@@ -0,0 +1,13 @@
name: test
jobs:
job1:
strategy:
matrix:
os: [ubuntu-22.04, ubuntu-20.04]
version: [1.17, 1.18, 1.19]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version

View File

@@ -0,0 +1,101 @@
name: test
jobs:
job1:
name: job1 (ubuntu-20.04, 1.17)
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.17
---
name: test
jobs:
job1:
name: job1 (ubuntu-20.04, 1.18)
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.18
---
name: test
jobs:
job1:
name: job1 (ubuntu-20.04, 1.19)
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.19
---
name: test
jobs:
job1:
name: job1 (ubuntu-22.04, 1.17)
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.17
---
name: test
jobs:
job1:
name: job1 (ubuntu-22.04, 1.18)
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.18
---
name: test
jobs:
job1:
name: job1 (ubuntu-22.04, 1.19)
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.19

View File

@@ -0,0 +1,18 @@
package jobparser
import (
"embed"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
//go:embed testdata
var testdata embed.FS
func ReadTestdata(t *testing.T, name string) []byte {
content, err := testdata.ReadFile(filepath.Join("testdata", name))
require.NoError(t, err)
return content
}

View File

@@ -20,7 +20,7 @@ func (a *ActionRunsUsing) UnmarshalYAML(unmarshal func(interface{}) error) error
// Force input to lowercase for case insensitive comparison
format := ActionRunsUsing(strings.ToLower(using))
switch format {
case ActionRunsUsingNode20, ActionRunsUsingNode16, ActionRunsUsingNode12, ActionRunsUsingDocker, ActionRunsUsingComposite:
case ActionRunsUsingNode20, ActionRunsUsingNode16, ActionRunsUsingNode12, ActionRunsUsingDocker, ActionRunsUsingComposite, ActionRunsUsingGo:
*a = format
default:
return fmt.Errorf(fmt.Sprintf("The runs.using key in action.yml must be one of: %v, got %s", []string{
@@ -29,6 +29,7 @@ func (a *ActionRunsUsing) UnmarshalYAML(unmarshal func(interface{}) error) error
ActionRunsUsingNode12,
ActionRunsUsingNode16,
ActionRunsUsingNode20,
ActionRunsUsingGo,
}, format))
}
return nil
@@ -45,6 +46,8 @@ const (
ActionRunsUsingDocker = "docker"
// ActionRunsUsingComposite for running composite
ActionRunsUsingComposite = "composite"
// ActionRunsUsingGo for running with go
ActionRunsUsingGo = "go"
)
// ActionRuns are a field in Action

View File

@@ -162,6 +162,13 @@ func NewWorkflowPlanner(path string, noWorkflowRecurse bool) (WorkflowPlanner, e
return wp, nil
}
// CombineWorkflowPlanner combines workflows into a WorkflowPlanner
func CombineWorkflowPlanner(workflows ...*Workflow) WorkflowPlanner {
return &workflowPlanner{
workflows: workflows,
}
}
func NewSingleWorkflowPlanner(name string, f io.Reader) (WorkflowPlanner, error) {
wp := new(workflowPlanner)

View File

@@ -1,6 +1,7 @@
package model
import (
"crypto/sha256"
"fmt"
"io"
"reflect"
@@ -8,19 +9,22 @@ import (
"strconv"
"strings"
"github.com/nektos/act/pkg/common"
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
"github.com/nektos/act/pkg/common"
)
// Workflow is the structure of the files in .github/workflows
type Workflow struct {
File string
Name string `yaml:"name"`
RawOn yaml.Node `yaml:"on"`
Env map[string]string `yaml:"env"`
Jobs map[string]*Job `yaml:"jobs"`
Defaults Defaults `yaml:"defaults"`
File string
Name string `yaml:"name"`
RawOn yaml.Node `yaml:"on"`
Env map[string]string `yaml:"env"`
Jobs map[string]*Job `yaml:"jobs"`
Defaults Defaults `yaml:"defaults"`
RawConcurrency *RawConcurrency `yaml:"concurrency"`
RawPermissions yaml.Node `yaml:"permissions"`
}
// On events for the workflow
@@ -66,6 +70,30 @@ func (w *Workflow) OnEvent(event string) interface{} {
return nil
}
func (w *Workflow) OnSchedule() []string {
schedules := w.OnEvent("schedule")
if schedules == nil {
return []string{}
}
switch val := schedules.(type) {
case []interface{}:
allSchedules := []string{}
for _, v := range val {
for k, cron := range v.(map[string]interface{}) {
if k != "cron" {
continue
}
allSchedules = append(allSchedules, cron.(string))
}
}
return allSchedules
default:
}
return []string{}
}
type WorkflowDispatchInput struct {
Description string `yaml:"description"`
Required bool `yaml:"required"`
@@ -173,6 +201,7 @@ type Job struct {
Uses string `yaml:"uses"`
With map[string]interface{} `yaml:"with"`
RawSecrets yaml.Node `yaml:"secrets"`
RawPermissions yaml.Node `yaml:"permissions"`
Result string
}
@@ -549,10 +578,14 @@ type ContainerSpec struct {
Args string
Name string
Reuse bool
// Gitea specific
Cmd []string `yaml:"cmd"`
}
// Step is the structure of one step in a job
type Step struct {
Number int `yaml:"-"`
ID string `yaml:"id"`
If yaml.Node `yaml:"if"`
Name string `yaml:"name"`
@@ -599,7 +632,7 @@ func (s *Step) GetEnv() map[string]string {
func (s *Step) ShellCommand() string {
shellCommand := ""
//Reference: https://github.com/actions/runner/blob/8109c962f09d9acc473d92c595ff43afceddb347/src/Runner.Worker/Handlers/ScriptHandlerHelpers.cs#L9-L17
// Reference: https://github.com/actions/runner/blob/8109c962f09d9acc473d92c595ff43afceddb347/src/Runner.Worker/Handlers/ScriptHandlerHelpers.cs#L9-L17
switch s.Shell {
case "", "bash":
shellCommand = "bash --noprofile --norc -e -o pipefail {0}"
@@ -688,6 +721,12 @@ func (s *Step) Type() StepType {
return StepTypeUsesActionRemote
}
// UsesHash returns a hash of the uses string.
// For Gitea.
func (s *Step) UsesHash() string {
return fmt.Sprintf("%x", sha256.Sum256([]byte(s.Uses)))
}
// ReadWorkflow returns a list of jobs for a given workflow file reader
func ReadWorkflow(in io.Reader) (*Workflow, error) {
w := new(Workflow)
@@ -733,3 +772,10 @@ func decodeNode(node yaml.Node, out interface{}) bool {
}
return true
}
// For Gitea
// RawConcurrency represents a workflow concurrency or a job concurrency with uninterpolated options
type RawConcurrency struct {
Group string `yaml:"group,omitempty"`
CancelInProgress string `yaml:"cancel-in-progress,omitempty"`
}

View File

@@ -7,6 +7,88 @@ import (
"github.com/stretchr/testify/assert"
)
func TestReadWorkflow_ScheduleEvent(t *testing.T) {
yaml := `
name: local-action-docker-url
on:
schedule:
- cron: '30 5 * * 1,3'
- cron: '30 5 * * 2,4'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err := ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
schedules := workflow.OnEvent("schedule")
assert.Len(t, schedules, 2)
newSchedules := workflow.OnSchedule()
assert.Len(t, newSchedules, 2)
assert.Equal(t, "30 5 * * 1,3", newSchedules[0])
assert.Equal(t, "30 5 * * 2,4", newSchedules[1])
yaml = `
name: local-action-docker-url
on:
schedule:
test: '30 5 * * 1,3'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err = ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
newSchedules = workflow.OnSchedule()
assert.Len(t, newSchedules, 0)
yaml = `
name: local-action-docker-url
on:
schedule:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err = ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
newSchedules = workflow.OnSchedule()
assert.Len(t, newSchedules, 0)
yaml = `
name: local-action-docker-url
on: [push, tag]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: ./actions/docker-url
`
workflow, err = ReadWorkflow(strings.NewReader(yaml))
assert.NoError(t, err, "read workflow should succeed")
newSchedules = workflow.OnSchedule()
assert.Len(t, newSchedules, 0)
}
func TestReadWorkflow_StringEvent(t *testing.T) {
yaml := `
name: local-action-docker-url
@@ -521,3 +603,37 @@ func TestReadWorkflow_WorkflowDispatchConfig(t *testing.T) {
Type: "choice",
}, workflowDispatch.Inputs["logLevel"])
}
func TestStep_UsesHash(t *testing.T) {
type fields struct {
Uses string
}
tests := []struct {
name string
fields fields
want string
}{
{
name: "regular",
fields: fields{
Uses: "https://gitea.com/testa/testb@v3",
},
want: "ae437878e9f285bd7518c58664f9fabbb12d05feddd7169c01702a2a14322aa8",
},
{
name: "empty",
fields: fields{
Uses: "",
},
want: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Step{
Uses: tt.fields.Uses,
}
assert.Equalf(t, tt.want, s.UsesHash(), "UsesHash()")
})
}
}

View File

@@ -112,7 +112,8 @@ func readActionImpl(ctx context.Context, step *model.Step, actionDir string, act
defer closer.Close()
action, err := model.ReadAction(reader)
logger.Debugf("Read action %v from '%s'", action, "Unknown")
// For Gitea, reduce log noise
// logger.Debugf("Read action %v from '%s'", action, "Unknown")
return action, err
}
@@ -162,7 +163,8 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
}
action := step.getActionModel()
logger.Debugf("About to run action %v", action)
// For Gitea, reduce log noise
// logger.Debugf("About to run action %v", action)
err := setupActionEnv(ctx, step, remoteAction)
if err != nil {
@@ -197,6 +199,21 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
}
return execAsComposite(step)(ctx)
case model.ActionRunsUsingGo:
if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
return err
}
rc.ApplyExtraPath(ctx, step.getEnv())
execFileName := fmt.Sprintf("%s.out", action.Runs.Main)
buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Main}
execArgs := []string{filepath.Join(containerActionDir, execFileName)}
return common.NewPipelineExecutor(
rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
)(ctx)
default:
return fmt.Errorf(fmt.Sprintf("The runs.using key must be one of: %v, got %s", []string{
model.ActionRunsUsingDocker,
@@ -204,6 +221,7 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
model.ActionRunsUsingNode16,
model.ActionRunsUsingNode20,
model.ActionRunsUsingComposite,
model.ActionRunsUsingGo,
}, action.Runs.Using))
}
}
@@ -399,23 +417,25 @@ func newStepContainer(ctx context.Context, step step, image string, cmd []string
networkMode = "default"
}
stepContainer := container.NewContainer(&container.NewContainerInput{
Cmd: cmd,
Entrypoint: entrypoint,
WorkingDir: rc.JobContainer.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: rc.Config.Secrets["DOCKER_USERNAME"],
Password: rc.Config.Secrets["DOCKER_PASSWORD"],
Name: createContainerName(rc.jobContainerName(), stepModel.ID),
Env: envList,
Mounts: mounts,
NetworkMode: networkMode,
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.Config.ContainerOptions,
Cmd: cmd,
Entrypoint: entrypoint,
WorkingDir: rc.JobContainer.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: rc.Config.Secrets["DOCKER_USERNAME"],
Password: rc.Config.Secrets["DOCKER_PASSWORD"],
Name: createSimpleContainerName(rc.jobContainerName(), "STEP-"+stepModel.ID),
Env: envList,
Mounts: mounts,
NetworkMode: networkMode,
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.Config.ContainerOptions,
AutoRemove: rc.Config.AutoRemove,
ValidVolumes: rc.Config.ValidVolumes,
})
return stepContainer
}
@@ -491,7 +511,8 @@ func hasPreStep(step actionStep) common.Conditional {
return action.Runs.Using == model.ActionRunsUsingComposite ||
((action.Runs.Using == model.ActionRunsUsingNode12 ||
action.Runs.Using == model.ActionRunsUsingNode16 ||
action.Runs.Using == model.ActionRunsUsingNode20) &&
action.Runs.Using == model.ActionRunsUsingNode20 ||
action.Runs.Using == model.ActionRunsUsingGo) &&
action.Runs.Pre != "")
}
}
@@ -514,7 +535,7 @@ func runPreStep(step actionStep) common.Executor {
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), stepModel.UsesHash())
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""
@@ -550,6 +571,43 @@ func runPreStep(step actionStep) common.Executor {
}
return fmt.Errorf("missing steps in composite action")
case model.ActionRunsUsingGo:
// default input values are not populated for pre steps, but explicitly provided inputs are available
populateEnvsFromInput(ctx, step.getEnv(), action, rc)
// todo: refactor into step
var actionDir string
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), stepModel.UsesHash())
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""
}
actionLocation := ""
if actionPath != "" {
actionLocation = path.Join(actionDir, actionPath)
} else {
actionLocation = actionDir
}
_, containerActionDir := getContainerActionPaths(stepModel, actionLocation, rc)
if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
return err
}
rc.ApplyExtraPath(ctx, step.getEnv())
execFileName := fmt.Sprintf("%s.out", action.Runs.Pre)
buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Pre}
execArgs := []string{filepath.Join(containerActionDir, execFileName)}
return common.NewPipelineExecutor(
rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
)(ctx)
default:
return nil
}
@@ -587,7 +645,8 @@ func hasPostStep(step actionStep) common.Conditional {
return action.Runs.Using == model.ActionRunsUsingComposite ||
((action.Runs.Using == model.ActionRunsUsingNode12 ||
action.Runs.Using == model.ActionRunsUsingNode16 ||
action.Runs.Using == model.ActionRunsUsingNode20) &&
action.Runs.Using == model.ActionRunsUsingNode20 ||
action.Runs.Using == model.ActionRunsUsingGo) &&
action.Runs.Post != "")
}
}
@@ -606,7 +665,7 @@ func runPostStep(step actionStep) common.Executor {
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), stepModel.UsesHash())
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""
@@ -643,6 +702,19 @@ func runPostStep(step actionStep) common.Executor {
}
return fmt.Errorf("missing steps in composite action")
case model.ActionRunsUsingGo:
populateEnvsFromSavedState(step.getEnv(), step, rc)
rc.ApplyExtraPath(ctx, step.getEnv())
execFileName := fmt.Sprintf("%s.out", action.Runs.Post)
buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Post}
execArgs := []string{filepath.Join(containerActionDir, execFileName)}
return common.NewPipelineExecutor(
rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
)(ctx)
default:
return nil
}


@@ -27,7 +27,7 @@ func evaluateCompositeInputAndEnv(ctx context.Context, parent *RunContext, step
envKey := regexp.MustCompile("[^A-Z0-9-]").ReplaceAllString(strings.ToUpper(inputID), "_")
envKey = fmt.Sprintf("INPUT_%s", strings.ToUpper(envKey))
// lookup if key is defined in the step but the the already
// lookup if key is defined in the step but the already
// evaluated value from the environment
_, defined := step.getStepModel().With[inputID]
if value, ok := stepEnv[envKey]; defined && ok {
@@ -140,6 +140,7 @@ func (rc *RunContext) compositeExecutor(action *model.Action) *compositeSteps {
if step.ID == "" {
step.ID = fmt.Sprintf("%d", i)
}
step.Number = i
// create a copy of the step, since this composite action could
// run multiple times and we might modify the instance


@@ -9,6 +9,7 @@ import (
)
var commandPatternGA *regexp.Regexp
var commandPatternADO *regexp.Regexp
func init() {
@@ -41,7 +42,9 @@ func (rc *RunContext) commandHandler(ctx context.Context) common.LineHandler {
}
if resumeCommand != "" && command != resumeCommand {
logger.Infof(" \U00002699 %s", line)
// There should not be any emojis in the log output for Gitea.
// The code in the switch statement is the same.
logger.Infof("%s", line)
return false
}
arg = unescapeCommandData(arg)
@@ -54,36 +57,37 @@ func (rc *RunContext) commandHandler(ctx context.Context) common.LineHandler {
case "add-path":
rc.addPath(ctx, arg)
case "debug":
logger.Infof(" \U0001F4AC %s", line)
logger.Infof("%s", line)
case "warning":
logger.Infof(" \U0001F6A7 %s", line)
logger.Infof("%s", line)
case "error":
logger.Infof(" \U00002757 %s", line)
logger.Infof("%s", line)
case "add-mask":
rc.AddMask(arg)
logger.Infof(" \U00002699 %s", "***")
logger.Infof("%s", "***")
case "stop-commands":
resumeCommand = arg
logger.Infof(" \U00002699 %s", line)
logger.Infof("%s", line)
case resumeCommand:
resumeCommand = ""
logger.Infof(" \U00002699 %s", line)
logger.Infof("%s", line)
case "save-state":
logger.Infof(" \U0001f4be %s", line)
logger.Infof("%s", line)
rc.saveState(ctx, kvPairs, arg)
case "add-matcher":
logger.Infof(" \U00002753 add-matcher %s", arg)
logger.Infof("%s", line)
default:
logger.Infof(" \U00002753 %s", line)
logger.Infof("%s", line)
}
return false
// return true to let gitea's logger handle these special outputs also
return true
}
}
func (rc *RunContext) setEnv(ctx context.Context, kvPairs map[string]string, arg string) {
name := kvPairs["name"]
common.Logger(ctx).Infof(" \U00002699 ::set-env:: %s=%s", name, arg)
common.Logger(ctx).Infof("::set-env:: %s=%s", name, arg)
if rc.Env == nil {
rc.Env = make(map[string]string)
}
@@ -100,6 +104,7 @@ func (rc *RunContext) setEnv(ctx context.Context, kvPairs map[string]string, arg
mergeIntoMap(rc.Env, newenv)
mergeIntoMap(rc.GlobalEnv, newenv)
}
func (rc *RunContext) setOutput(ctx context.Context, kvPairs map[string]string, arg string) {
logger := common.Logger(ctx)
stepID := rc.CurrentStep
@@ -115,11 +120,12 @@ func (rc *RunContext) setOutput(ctx context.Context, kvPairs map[string]string,
return
}
logger.Infof(" \U00002699 ::set-output:: %s=%s", outputName, arg)
logger.Infof("::set-output:: %s=%s", outputName, arg)
result.Outputs[outputName] = arg
}
func (rc *RunContext) addPath(ctx context.Context, arg string) {
common.Logger(ctx).Infof(" \U00002699 ::add-path:: %s", arg)
common.Logger(ctx).Infof("::add-path:: %s", arg)
extraPath := []string{arg}
for _, v := range rc.ExtraPath {
if v != arg {
@@ -140,6 +146,7 @@ func parseKeyValuePairs(kvPairs string, separator string) map[string]string {
}
return rtn
}
func unescapeCommandData(arg string) string {
escapeMap := map[string]string{
"%25": "%",
@@ -151,6 +158,7 @@ func unescapeCommandData(arg string) string {
}
return arg
}
func unescapeCommandProperty(arg string) string {
escapeMap := map[string]string{
"%25": "%",
@@ -164,6 +172,7 @@ func unescapeCommandProperty(arg string) string {
}
return arg
}
func unescapeKvPairs(kvPairs map[string]string) map[string]string {
for k, v := range kvPairs {
kvPairs[k] = unescapeCommandProperty(v)


@@ -63,6 +63,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
if stepModel.ID == "" {
stepModel.ID = fmt.Sprintf("%d", i)
}
stepModel.Number = i
step, err := sf.newStep(stepModel, rc)
@@ -70,7 +71,19 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
return common.NewErrorExecutor(err)
}
preSteps = append(preSteps, useStepLogger(rc, stepModel, stepStagePre, step.pre()))
preExec := step.pre()
preSteps = append(preSteps, useStepLogger(rc, stepModel, stepStagePre, func(ctx context.Context) error {
logger := common.Logger(ctx)
preErr := preExec(ctx)
if preErr != nil {
logger.Errorf("%v", preErr)
common.SetJobError(ctx, preErr)
} else if ctx.Err() != nil {
logger.Errorf("%v", ctx.Err())
common.SetJobError(ctx, ctx.Err())
}
return preErr
}))
stepExec := step.main()
steps = append(steps, useStepLogger(rc, stepModel, stepStageMain, func(ctx context.Context) error {
@@ -104,10 +117,31 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
defer cancel()
logger := common.Logger(ctx)
// For Gitea
// We don't need to call `stopServiceContainers` here since it will be called by the following `info.stopContainer`
// logger.Infof("Cleaning up services for job %s", rc.JobName)
// if err := rc.stopServiceContainers()(ctx); err != nil {
// logger.Errorf("Error while cleaning services: %v", err)
// }
logger.Infof("Cleaning up container for job %s", rc.JobName)
if err = info.stopContainer()(ctx); err != nil {
logger.Errorf("Error while stop job container: %v", err)
}
// For Gitea
// We don't need to call `NewDockerNetworkRemoveExecutor` here since it is called by `info.stopContainer` above
// if !rc.IsHostEnv(ctx) && rc.Config.ContainerNetworkMode == "" {
// // clean network in docker mode only
// // if the value of `ContainerNetworkMode` is empty string,
// // it means that the network to which containers are connecting is created by `act_runner`,
// // so, we should remove the network at last.
// networkName, _ := rc.networkName()
// logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName)
// if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil {
// logger.Errorf("Error while cleaning network: %v", err)
// }
// }
}
setJobResult(ctx, info, rc, jobError == nil)
setJobOutputs(ctx, rc)
@@ -151,7 +185,8 @@ func setJobResult(ctx context.Context, info jobInfo, rc *RunContext, success boo
info.result(jobResult)
if rc.caller != nil {
// set reusable workflow job result
rc.caller.runContext.result(jobResult)
rc.caller.setReusedWorkflowJobResult(rc.JobName, jobResult) // For Gitea
return
}
jobResultMessage := "succeeded"
@@ -179,7 +214,7 @@ func setJobOutputs(ctx context.Context, rc *RunContext) {
func useStepLogger(rc *RunContext, stepModel *model.Step, stage stepStage, executor common.Executor) common.Executor {
return func(ctx context.Context) error {
ctx = withStepLogger(ctx, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())
ctx = withStepLogger(ctx, stepModel.Number, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())
rawLogger := common.Logger(ctx).WithField("raw_output", true)
logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {


@@ -96,6 +96,17 @@ func WithJobLogger(ctx context.Context, jobID string, jobName string, config *Co
logger.SetFormatter(formatter)
}
{ // Adapt to Gitea
if hook := common.LoggerHook(ctx); hook != nil {
logger.AddHook(hook)
}
if config.JobLoggerLevel != nil {
logger.SetLevel(*config.JobLoggerLevel)
} else {
logger.SetLevel(logrus.TraceLevel)
}
}
logger.SetFormatter(&maskedFormatter{
Formatter: logger.Formatter,
masker: valueMasker(config.InsecureSecrets, config.Secrets),
@@ -132,11 +143,12 @@ func WithCompositeStepLogger(ctx context.Context, stepID string) context.Context
}).WithContext(ctx))
}
func withStepLogger(ctx context.Context, stepID string, stepName string, stageName string) context.Context {
func withStepLogger(ctx context.Context, stepNumber int, stepID, stepName, stageName string) context.Context {
rtn := common.Logger(ctx).WithFields(logrus.Fields{
"step": stepName,
"stepID": []string{stepID},
"stage": stageName,
"stepNumber": stepNumber,
"step": stepName,
"stepID": []string{stepID},
"stage": stageName,
})
return common.WithLogger(ctx, rtn)
}


@@ -9,6 +9,7 @@ import (
"os"
"path"
"regexp"
"strings"
"sync"
"github.com/nektos/act/pkg/common"
@@ -17,15 +18,53 @@ import (
)
func newLocalReusableWorkflowExecutor(rc *RunContext) common.Executor {
return newReusableWorkflowExecutor(rc, rc.Config.Workdir, rc.Run.Job().Uses)
if !rc.Config.NoSkipCheckout {
fullPath := rc.Run.Job().Uses
fileName := path.Base(fullPath)
workflowDir := strings.TrimSuffix(fullPath, path.Join("/", fileName))
workflowDir = strings.TrimPrefix(workflowDir, "./")
return common.NewPipelineExecutor(
newReusableWorkflowExecutor(rc, workflowDir, fileName),
)
}
// ./.gitea/workflows/wf.yml -> .gitea/workflows/wf.yml
trimmedUses := strings.TrimPrefix(rc.Run.Job().Uses, "./")
// uses string format is {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}
uses := fmt.Sprintf("%s/%s@%s", rc.Config.PresetGitHubContext.Repository, trimmedUses, rc.Config.PresetGitHubContext.Sha)
remoteReusableWorkflow := newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
workflowDir := fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(uses))
// If the repository is private, we need a token to clone it
token := rc.Config.GetToken()
return common.NewPipelineExecutor(
newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir, token)),
newReusableWorkflowExecutor(rc, workflowDir, remoteReusableWorkflow.FilePath()),
)
}
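In the branch that clones the workflow, the local `uses` path is rewritten into the remote-style `{owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}` form from the preset GitHub context. A minimal sketch of that rewrite, with hypothetical repository and SHA values standing in for `rc.Config.PresetGitHubContext`:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical stand-ins for rc.Config.PresetGitHubContext and the job's `uses` value.
	repository := "org/repo"
	sha := "0123456789abcdef"
	jobUses := "./.gitea/workflows/wf.yml"

	// ./.gitea/workflows/wf.yml -> .gitea/workflows/wf.yml
	trimmedUses := strings.TrimPrefix(jobUses, "./")

	// {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}
	uses := fmt.Sprintf("%s/%s@%s", repository, trimmedUses, sha)
	fmt.Println(uses) // org/repo/.gitea/workflows/wf.yml@0123456789abcdef
}
```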
func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
uses := rc.Run.Job().Uses
remoteReusableWorkflow := newRemoteReusableWorkflow(uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.github/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
var remoteReusableWorkflow *remoteReusableWorkflow
if strings.HasPrefix(uses, "http://") || strings.HasPrefix(uses, "https://") {
remoteReusableWorkflow = newRemoteReusableWorkflowFromAbsoluteURL(uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format http(s)://{domain}/{owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
} else {
remoteReusableWorkflow = newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
}
// uses with safe filename makes the target directory look something like this {owner}-{repo}-.github-workflows-{filename}@{ref}
@@ -38,9 +77,12 @@ func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
return newActionCacheReusableWorkflowExecutor(rc, filename, remoteReusableWorkflow)
}
// FIXME: if the reusable workflow is from a private repository, we need to provide a token to access the repository.
token := ""
return common.NewPipelineExecutor(
newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir)),
newReusableWorkflowExecutor(rc, workflowDir, fmt.Sprintf("./.github/workflows/%s", remoteReusableWorkflow.Filename)),
newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir, token)),
newReusableWorkflowExecutor(rc, workflowDir, remoteReusableWorkflow.FilePath()),
)
}
@@ -92,7 +134,7 @@ func newMutexExecutor(executor common.Executor) common.Executor {
}
}
func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkflow, targetDirectory string) common.Executor {
func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkflow, targetDirectory, token string) common.Executor {
return common.NewConditionalExecutor(
func(ctx context.Context) bool {
_, err := os.Stat(targetDirectory)
@@ -100,12 +142,15 @@ func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkfl
return notExists
},
func(ctx context.Context) error {
remoteReusableWorkflow.URL = rc.getGithubContext(ctx).ServerURL
// Do not change the remoteReusableWorkflow.URL, because:
// 1. Gitea doesn't support specifying GithubContext.ServerURL by the GITHUB_SERVER_URL env
// 2. Gitea has already full URL with rc.Config.GitHubInstance when calling newRemoteReusableWorkflowWithPlat
// remoteReusableWorkflow.URL = rc.getGithubContext(ctx).ServerURL
return git.NewGitCloneExecutor(git.NewGitCloneExecutorInput{
URL: remoteReusableWorkflow.CloneURL(),
Ref: remoteReusableWorkflow.Ref,
Dir: targetDirectory,
Token: rc.Config.Token,
Token: token,
OfflineMode: rc.Config.ActionOfflineMode,
})(ctx)
},
@@ -130,7 +175,11 @@ func newReusableWorkflowExecutor(rc *RunContext, directory string, workflow stri
return err
}
return runner.NewPlanExecutor(plan)(ctx)
// return runner.NewPlanExecutor(plan)(ctx)
return common.NewPipelineExecutor( // For Gitea
runner.NewPlanExecutor(plan),
setReusedWorkflowCallerResult(rc, runner),
)(ctx)
}
}
@@ -140,6 +189,8 @@ func NewReusableWorkflowRunner(rc *RunContext) (Runner, error) {
eventJSON: rc.EventJSON,
caller: &caller{
runContext: rc,
reusedWorkflowJobResults: map[string]string{}, // For Gitea
},
}
@@ -152,12 +203,62 @@ type remoteReusableWorkflow struct {
Repo string
Filename string
Ref string
GitPlatform string
}
func (r *remoteReusableWorkflow) CloneURL() string {
return fmt.Sprintf("%s/%s/%s", r.URL, r.Org, r.Repo)
// In Gitea, r.URL always has the protocol prefix, so we don't need to add an extra prefix in this case.
if strings.HasPrefix(r.URL, "http://") || strings.HasPrefix(r.URL, "https://") {
return fmt.Sprintf("%s/%s/%s", r.URL, r.Org, r.Repo)
}
return fmt.Sprintf("https://%s/%s/%s", r.URL, r.Org, r.Repo)
}
func (r *remoteReusableWorkflow) FilePath() string {
return fmt.Sprintf("./.%s/workflows/%s", r.GitPlatform, r.Filename)
}
// For Gitea
// newRemoteReusableWorkflowWithPlat creates a `remoteReusableWorkflow`
// workflows from `.gitea/workflows` and `.github/workflows` are supported
func newRemoteReusableWorkflowWithPlat(url, uses string) *remoteReusableWorkflow {
// GitHub docs:
// https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses
r := regexp.MustCompile(`^([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
matches := r.FindStringSubmatch(uses)
if len(matches) != 6 {
return nil
}
return &remoteReusableWorkflow{
Org: matches[1],
Repo: matches[2],
GitPlatform: matches[3],
Filename: matches[4],
Ref: matches[5],
URL: url,
}
}
// For Gitea
// newRemoteReusableWorkflowFromAbsoluteURL creates a `remoteReusableWorkflow` from an absolute URL
func newRemoteReusableWorkflowFromAbsoluteURL(uses string) *remoteReusableWorkflow {
r := regexp.MustCompile(`^(https?://.*)/([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
matches := r.FindStringSubmatch(uses)
if len(matches) != 7 {
return nil
}
return &remoteReusableWorkflow{
URL: matches[1],
Org: matches[2],
Repo: matches[3],
GitPlatform: matches[4],
Filename: matches[5],
Ref: matches[6],
}
}
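The two constructors above differ only in where the instance URL comes from: `newRemoteReusableWorkflowWithPlat` takes it from configuration, while `newRemoteReusableWorkflowFromAbsoluteURL` parses it out of the `uses` string. A small sketch of what the two regular expressions capture for typical inputs (the host name is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same patterns as newRemoteReusableWorkflowWithPlat and
	// newRemoteReusableWorkflowFromAbsoluteURL above.
	withPlat := regexp.MustCompile(`^([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
	absolute := regexp.MustCompile(`^(https?://.*)/([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)

	fmt.Println(withPlat.FindStringSubmatch("org/repo/.gitea/workflows/ci.yml@main")[1:])
	// [org repo gitea ci.yml main] -> Org, Repo, GitPlatform, Filename, Ref (URL comes from config)

	fmt.Println(absolute.FindStringSubmatch("https://gitea.example.com/org/repo/.gitea/workflows/ci.yml@main")[1:])
	// [https://gitea.example.com org repo gitea ci.yml main] -> URL, Org, Repo, GitPlatform, Filename, Ref
}
```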
// Deprecated: use newRemoteReusableWorkflowWithPlat
func newRemoteReusableWorkflow(uses string) *remoteReusableWorkflow {
// GitHub docs:
// https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses
@@ -174,3 +275,47 @@ func newRemoteReusableWorkflow(uses string) *remoteReusableWorkflow {
URL: "https://github.com",
}
}
// For Gitea
func setReusedWorkflowCallerResult(rc *RunContext, runner Runner) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
runnerImpl, ok := runner.(*runnerImpl)
if !ok {
logger.Warn("Failed to get caller from runner")
return nil
}
caller := runnerImpl.caller
allJobDone := true
hasFailure := false
for _, result := range caller.reusedWorkflowJobResults {
if result == "pending" {
allJobDone = false
break
}
if result == "failure" {
hasFailure = true
}
}
if allJobDone {
reusedWorkflowJobResult := "success"
reusedWorkflowJobResultMessage := "succeeded"
if hasFailure {
reusedWorkflowJobResult = "failure"
reusedWorkflowJobResultMessage = "failed"
}
if rc.caller != nil {
rc.caller.setReusedWorkflowJobResult(rc.JobName, reusedWorkflowJobResult)
} else {
rc.result(reusedWorkflowJobResult)
logger.WithField("jobResult", reusedWorkflowJobResult).Infof("\U0001F3C1 Job %s", reusedWorkflowJobResultMessage)
}
}
return nil
}
}
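`setReusedWorkflowCallerResult` only reports a result once every job recorded in `reusedWorkflowJobResults` has left the `pending` state, and a single `failure` is enough to fail the whole reusable workflow. A minimal sketch of that aggregation rule, detached from the runner types (the `aggregate` helper is illustrative only):

```go
package main

import "fmt"

// aggregate is an illustrative helper that mirrors the decision logic in
// setReusedWorkflowCallerResult: report nothing while any job is still
// pending, otherwise "failure" wins over "success"/"skipped".
func aggregate(results map[string]string) (string, bool) {
	hasFailure := false
	for _, result := range results {
		if result == "pending" {
			return "", false
		}
		if result == "failure" {
			hasFailure = true
		}
	}
	if hasFailure {
		return "failure", true
	}
	return "success", true
}

func main() {
	fmt.Println(aggregate(map[string]string{"build": "success", "test": "pending"})) // "", false: still pending, nothing to report yet
	fmt.Println(aggregate(map[string]string{"build": "success", "test": "failure"})) // failure true: one failed job fails the whole call
}
```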


@@ -16,13 +16,15 @@ import (
"regexp"
"runtime"
"strings"
"time"
"github.com/docker/go-connections/nat"
"github.com/opencontainers/selinux/go-selinux"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/container"
"github.com/nektos/act/pkg/exprparser"
"github.com/nektos/act/pkg/model"
"github.com/opencontainers/selinux/go-selinux"
)
// RunContext contains info about current job
@@ -81,13 +83,24 @@ func (rc *RunContext) GetEnv() map[string]string {
}
}
rc.Env["ACT"] = "true"
if !rc.Config.NoSkipCheckout {
rc.Env["ACT_SKIP_CHECKOUT"] = "true"
}
return rc.Env
}
func (rc *RunContext) jobContainerName() string {
return createContainerName("act", rc.String())
nameParts := []string{rc.Config.ContainerNamePrefix, "WORKFLOW-" + rc.Run.Workflow.Name, "JOB-" + rc.Name}
if rc.caller != nil {
nameParts = append(nameParts, "CALLED-BY-"+rc.caller.runContext.JobName)
}
// return createSimpleContainerName(rc.Config.ContainerNamePrefix, "WORKFLOW-"+rc.Run.Workflow.Name, "JOB-"+rc.Name)
return createSimpleContainerName(nameParts...) // For Gitea
}
// Deprecated: use `networkNameForGitea`
// networkName returns the name of the network which will be created by `act` automatically for the job;
// a network is only created when using service containers
func (rc *RunContext) networkName() (string, bool) {
@@ -100,6 +113,14 @@ func (rc *RunContext) networkName() (string, bool) {
return string(rc.Config.ContainerNetworkMode), false
}
// networkNameForGitea returns the name of the network
func (rc *RunContext) networkNameForGitea() (string, bool) {
if rc.Config.ContainerNetworkMode != "" {
return string(rc.Config.ContainerNetworkMode), false
}
return fmt.Sprintf("%s-%s-network", rc.jobContainerName(), rc.Run.JobID), true
}
func getDockerDaemonSocketMountPath(daemonPath string) string {
if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
scheme := daemonPath[:protoIndex]
@@ -167,6 +188,14 @@ func (rc *RunContext) GetBindsAndMounts() ([]string, map[string]string) {
mounts[name] = ext.ToContainerPath(rc.Config.Workdir)
}
// For Gitea
// add some default binds and mounts to ValidVolumes
rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, "act-toolcache")
rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, name)
rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, name+"-env")
// TODO: add a new configuration to control whether the docker daemon can be mounted
rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, getDockerDaemonSocketMountPath(rc.Config.ContainerDaemonSocket))
return binds, mounts
}
@@ -261,6 +290,9 @@ func (rc *RunContext) startJobContainer() common.Executor {
logger.Infof("\U0001f680 Start image=%s", image)
name := rc.jobContainerName()
// For Gitea: to support --volumes-from <container_name_or_id> in options,
// we need to expose the container name via an environment variable.
rc.Env["JOB_CONTAINER_NAME"] = name
envList := make([]string, 0)
@@ -276,7 +308,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
// specify the network to which the container will connect at the `docker create` stage (like running: docker create --network <networkName> <image>).
// If service containers are used, a new network will be created for them
// and removed at the end.
networkName, createAndDeleteNetwork := rc.networkName()
networkName, createAndDeleteNetwork := rc.networkNameForGitea()
// add service containers
for serviceID, spec := range rc.Run.Job().Services {
@@ -289,6 +321,11 @@ func (rc *RunContext) startJobContainer() common.Executor {
for k, v := range interpolatedEnvs {
envs = append(envs, fmt.Sprintf("%s=%s", k, v))
}
// interpolate cmd
interpolatedCmd := make([]string, 0, len(spec.Cmd))
for _, v := range spec.Cmd {
interpolatedCmd = append(interpolatedCmd, rc.ExprEval.Interpolate(ctx, v))
}
username, password, err = rc.handleServiceCredentials(ctx, spec.Credentials)
if err != nil {
return fmt.Errorf("failed to handle service %s credentials: %w", serviceID, err)
@@ -316,6 +353,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
Image: rc.ExprEval.Interpolate(ctx, spec.Image),
Username: username,
Password: password,
Cmd: interpolatedCmd,
Env: envs,
Mounts: serviceMounts,
Binds: serviceBinds,
@@ -324,6 +362,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
AutoRemove: rc.Config.AutoRemove,
Options: rc.ExprEval.Interpolate(ctx, spec.Options),
NetworkMode: networkName,
NetworkAliases: []string{serviceID},
@@ -348,15 +387,15 @@ func (rc *RunContext) startJobContainer() common.Executor {
if err := rc.stopServiceContainers()(ctx); err != nil {
logger.Errorf("Error while cleaning services: %v", err)
}
if createAndDeleteNetwork {
// clean network if it has been created by act
// if using service containers
// it means that the network to which containers are connecting is created by `act_runner`,
// so, we should remove the network at last.
logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName)
if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil {
logger.Errorf("Error while cleaning network: %v", err)
}
}
if createAndDeleteNetwork {
// clean network if it has been created by act
// if using service containers
// it means that the network to which containers are connecting is created by `act_runner`,
// so, we should remove the network at last.
logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName)
if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil {
logger.Errorf("Error while cleaning network: %v", err)
}
}
return nil
@@ -372,9 +411,12 @@ func (rc *RunContext) startJobContainer() common.Executor {
jobContainerNetwork = "host"
}
// For Gitea, `jobContainerNetwork` should be the same as `networkName`
jobContainerNetwork = networkName
rc.JobContainer = container.NewContainer(&container.NewContainerInput{
Cmd: nil,
Entrypoint: []string{"tail", "-f", "/dev/null"},
Entrypoint: []string{"/bin/sleep", fmt.Sprint(rc.Config.ContainerMaxLifetime.Round(time.Second).Seconds())},
WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: username,
@@ -391,6 +433,8 @@ func (rc *RunContext) startJobContainer() common.Executor {
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.options(ctx),
AutoRemove: rc.Config.AutoRemove,
ValidVolumes: rc.Config.ValidVolumes,
})
if rc.JobContainer == nil {
return errors.New("Failed to create job container")
@@ -614,6 +658,7 @@ func (rc *RunContext) Executor() (common.Executor, error) {
return func(ctx context.Context) error {
res, err := rc.isEnabled(ctx)
if err != nil {
rc.caller.setReusedWorkflowJobResult(rc.JobName, "failure") // For Gitea
return err
}
if res {
@@ -639,6 +684,18 @@ func (rc *RunContext) runsOnImage(ctx context.Context) string {
common.Logger(ctx).Errorf("'runs-on' key not defined in %s", rc.String())
}
job := rc.Run.Job()
runsOn := job.RunsOn()
for i, v := range runsOn {
runsOn[i] = rc.ExprEval.Interpolate(ctx, v)
}
if pick := rc.Config.PlatformPicker; pick != nil {
if image := pick(runsOn); image != "" {
return image
}
}
for _, platformName := range rc.runsOnPlatformNames(ctx) {
image := rc.Config.Platforms[strings.ToLower(platformName)]
if image != "" {
@@ -676,7 +733,7 @@ func (rc *RunContext) options(ctx context.Context) string {
job := rc.Run.Job()
c := job.Container()
if c != nil {
return rc.ExprEval.Interpolate(ctx, c.Options)
return rc.Config.ContainerOptions + " " + rc.ExprEval.Interpolate(ctx, c.Options)
}
return rc.Config.ContainerOptions
@@ -697,6 +754,10 @@ func (rc *RunContext) isEnabled(ctx context.Context) (bool, error) {
}
if !runJob {
if rc.caller != nil { // For Gitea
rc.caller.setReusedWorkflowJobResult(rc.JobName, "skipped")
return false, nil
}
l.WithField("jobResult", "skipped").Debugf("Skipping job '%s' due to '%s'", job.Name, job.If.Value)
return false, nil
}
@@ -725,6 +786,7 @@ func mergeMaps(maps ...map[string]string) map[string]string {
return rtnMap
}
// Deprecated: use createSimpleContainerName
func createContainerName(parts ...string) string {
name := strings.Join(parts, "-")
pattern := regexp.MustCompile("[^a-zA-Z0-9]")
@@ -738,6 +800,22 @@ func createContainerName(parts ...string) string {
return fmt.Sprintf("%s-%x", trimmedName, hash)
}
func createSimpleContainerName(parts ...string) string {
pattern := regexp.MustCompile("[^a-zA-Z0-9-]")
name := make([]string, 0, len(parts))
for _, v := range parts {
v = pattern.ReplaceAllString(v, "-")
v = strings.Trim(v, "-")
for strings.Contains(v, "--") {
v = strings.ReplaceAll(v, "--", "-")
}
if v != "" {
name = append(name, v)
}
}
return strings.Join(name, "_")
}
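`createSimpleContainerName` keeps only ASCII letters, digits and single dashes inside each part and joins the surviving parts with underscores; `Test_createSimpleContainerName` further down exercises exactly this. As a usage sketch with hypothetical prefix, workflow and job names (the real parts come from `jobContainerName` and `newStepContainer`):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same sanitisation as createSimpleContainerName above.
func createSimpleContainerName(parts ...string) string {
	pattern := regexp.MustCompile("[^a-zA-Z0-9-]")
	name := make([]string, 0, len(parts))
	for _, v := range parts {
		v = pattern.ReplaceAllString(v, "-")
		v = strings.Trim(v, "-")
		for strings.Contains(v, "--") {
			v = strings.ReplaceAll(v, "--", "-")
		}
		if v != "" {
			name = append(name, v)
		}
	}
	return strings.Join(name, "_")
}

func main() {
	// Hypothetical prefix, workflow and job names.
	fmt.Println(createSimpleContainerName("GITEA-ACTIONS-TASK-1", "WORKFLOW-ci.yml", "JOB-build"))
	// GITEA-ACTIONS-TASK-1_WORKFLOW-ci-yml_JOB-build
}
```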
func trimToLen(s string, l int) string {
if l < 0 {
l = 0
@@ -820,6 +898,36 @@ func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext
ghc.Actor = "nektos/act"
}
{ // Adapt to Gitea
if preset := rc.Config.PresetGitHubContext; preset != nil {
ghc.Event = preset.Event
ghc.RunID = preset.RunID
ghc.RunNumber = preset.RunNumber
ghc.Actor = preset.Actor
ghc.Repository = preset.Repository
ghc.EventName = preset.EventName
ghc.Sha = preset.Sha
ghc.Ref = preset.Ref
ghc.RefName = preset.RefName
ghc.RefType = preset.RefType
ghc.HeadRef = preset.HeadRef
ghc.BaseRef = preset.BaseRef
ghc.Token = preset.Token
ghc.RepositoryOwner = preset.RepositoryOwner
ghc.RetentionDays = preset.RetentionDays
instance := rc.Config.GitHubInstance
if !strings.HasPrefix(instance, "http://") &&
!strings.HasPrefix(instance, "https://") {
instance = "https://" + instance
}
ghc.ServerURL = instance
ghc.APIURL = instance + "/api/v1" // the version of Gitea is v1
ghc.GraphQLURL = "" // Gitea doesn't support graphql
return ghc
}
}
if rc.EventJSON != "" {
err := json.Unmarshal([]byte(rc.EventJSON), &ghc.Event)
if err != nil {
@@ -849,6 +957,18 @@ func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext
ghc.APIURL = fmt.Sprintf("https://%s/api/v3", rc.Config.GitHubInstance)
ghc.GraphQLURL = fmt.Sprintf("https://%s/api/graphql", rc.Config.GitHubInstance)
}
{ // Adapt to Gitea
instance := rc.Config.GitHubInstance
if !strings.HasPrefix(instance, "http://") &&
!strings.HasPrefix(instance, "https://") {
instance = "https://" + instance
}
ghc.ServerURL = instance
ghc.APIURL = instance + "/api/v1" // the version of Gitea is v1
ghc.GraphQLURL = "" // Gitea doesn't support graphql
}
// allow to be overridden by user
if rc.Config.Env["GITHUB_SERVER_URL"] != "" {
ghc.ServerURL = rc.Config.Env["GITHUB_SERVER_URL"]
@@ -936,6 +1056,17 @@ func (rc *RunContext) withGithubEnv(ctx context.Context, github *model.GithubCon
env["GITHUB_API_URL"] = github.APIURL
env["GITHUB_GRAPHQL_URL"] = github.GraphQLURL
{ // Adapt to Gitea
instance := rc.Config.GitHubInstance
if !strings.HasPrefix(instance, "http://") &&
!strings.HasPrefix(instance, "https://") {
instance = "https://" + instance
}
env["GITHUB_SERVER_URL"] = instance
env["GITHUB_API_URL"] = instance + "/api/v1" // the version of Gitea is v1
env["GITHUB_GRAPHQL_URL"] = "" // Gitea doesn't support graphql
}
if rc.Config.ArtifactServerPath != "" {
setActionRuntimeVars(rc, env)
}
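All of the Gitea adaptation blocks above derive the server, API and GraphQL URLs the same way: prepend `https://` when the configured instance has no scheme, append `/api/v1`, and leave the GraphQL URL empty. A small sketch of the resulting values for a bare host versus a full URL (the `deriveURLs` helper and both hosts are illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// deriveURLs is an illustrative helper mirroring the Gitea adaptation above:
// prepend https:// when the configured instance has no scheme, use the v1 API
// path, and leave the GraphQL URL empty.
func deriveURLs(instance string) (serverURL, apiURL, graphQLURL string) {
	if !strings.HasPrefix(instance, "http://") && !strings.HasPrefix(instance, "https://") {
		instance = "https://" + instance
	}
	return instance, instance + "/api/v1", ""
}

func main() {
	fmt.Println(deriveURLs("gitea.example.com"))       // https://gitea.example.com https://gitea.example.com/api/v1 (GraphQL empty)
	fmt.Println(deriveURLs("http://gitea.local:3000")) // http://gitea.local:3000 http://gitea.local:3000/api/v1 (GraphQL empty)
}
```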


@@ -682,3 +682,24 @@ func TestRunContextGetEnv(t *testing.T) {
})
}
}
func Test_createSimpleContainerName(t *testing.T) {
tests := []struct {
parts []string
want string
}{
{
parts: []string{"a--a", "BB正", "c-C"},
want: "a-a_BB_c-C",
},
{
parts: []string{"a-a", "", "-"},
want: "a-a",
},
}
for _, tt := range tests {
t.Run(strings.Join(tt.parts, " "), func(t *testing.T) {
assert.Equalf(t, tt.want, createSimpleContainerName(tt.parts...), "createSimpleContainerName(%v)", tt.parts)
})
}
}


@@ -6,11 +6,14 @@ import (
"fmt"
"os"
"runtime"
"sync"
"time"
docker_container "github.com/docker/docker/api/types/container"
log "github.com/sirupsen/logrus"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/model"
log "github.com/sirupsen/logrus"
)
// Runner provides capabilities to run GitHub actions
@@ -61,10 +64,32 @@ type Config struct {
Matrix map[string]map[string]bool // Matrix config to run
ContainerNetworkMode docker_container.NetworkMode // the network mode of job containers (the value of --network)
ActionCache ActionCache // Use a custom ActionCache Implementation
PresetGitHubContext *model.GithubContext // the preset github context, overrides some fields like DefaultBranch, Env, Secrets etc.
EventJSON string // the content of JSON file to use for event.json in containers, overrides EventPath
ContainerNamePrefix string // the prefix of container name
ContainerMaxLifetime time.Duration // the max lifetime of job containers
DefaultActionInstance string // the default actions web site
PlatformPicker func(labels []string) string // platform picker, it will take precedence over Platforms if isn't nil
JobLoggerLevel *log.Level // the level of job logger
ValidVolumes []string // only volumes (and bind mounts) in this slice can be mounted on the job container or service containers
InsecureSkipTLS bool // whether to skip verifying TLS certificate of the Gitea instance
}
// GetToken: Adapt to Gitea
func (c Config) GetToken() string {
token := c.Secrets["GITHUB_TOKEN"]
if c.Secrets["GITEA_TOKEN"] != "" {
token = c.Secrets["GITEA_TOKEN"]
}
return token
}
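`GetToken` prefers `GITEA_TOKEN` over `GITHUB_TOKEN`. A quick sketch of that precedence with made-up secret values (the `token` helper is illustrative only):

```go
package main

import "fmt"

// token is an illustrative helper mirroring Config.GetToken above:
// GITEA_TOKEN wins when set, otherwise GITHUB_TOKEN is used (possibly empty).
func token(secrets map[string]string) string {
	t := secrets["GITHUB_TOKEN"]
	if secrets["GITEA_TOKEN"] != "" {
		t = secrets["GITEA_TOKEN"]
	}
	return t
}

func main() {
	fmt.Println(token(map[string]string{"GITHUB_TOKEN": "gh", "GITEA_TOKEN": "gt"})) // gt
	fmt.Println(token(map[string]string{"GITHUB_TOKEN": "gh"}))                      // gh
}
```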
type caller struct {
runContext *RunContext
updateResultLock sync.Mutex // For Gitea
reusedWorkflowJobResults map[string]string // For Gitea
}
type runnerImpl struct {
@@ -84,7 +109,9 @@ func New(runnerConfig *Config) (Runner, error) {
func (runner *runnerImpl) configure() (Runner, error) {
runner.eventJSON = "{}"
if runner.config.EventPath != "" {
if runner.config.EventJSON != "" {
runner.eventJSON = runner.config.EventJSON
} else if runner.config.EventPath != "" {
log.Debugf("Reading event.json from %s", runner.config.EventPath)
eventJSONBytes, err := os.ReadFile(runner.config.EventPath)
if err != nil {
@@ -183,6 +210,9 @@ func (runner *runnerImpl) NewPlanExecutor(plan *model.Plan) common.Executor {
if len(rc.String()) > maxJobNameLen {
maxJobNameLen = len(rc.String())
}
if rc.caller != nil { // For Gitea
rc.caller.setReusedWorkflowJobResult(rc.JobName, "pending")
}
stageExecutor = append(stageExecutor, func(ctx context.Context) error {
jobName := fmt.Sprintf("%-*s", maxJobNameLen, rc.String())
executor, err := rc.Executor()
@@ -254,3 +284,10 @@ func (runner *runnerImpl) newRunContext(ctx context.Context, run *model.Run, mat
return rc
}
// For Gitea
func (c *caller) setReusedWorkflowJobResult(jobName string, result string) {
c.updateResultLock.Lock()
defer c.updateResultLock.Unlock()
c.reusedWorkflowJobResults[jobName] = result
}


@@ -580,6 +580,43 @@ func TestRunEventSecrets(t *testing.T) {
tjfi.runTest(context.Background(), t, &Config{Secrets: secrets, Env: env})
}
func TestRunWithService(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
log.SetLevel(log.DebugLevel)
ctx := context.Background()
platforms := map[string]string{
"ubuntu-latest": "node:12.20.1-buster-slim",
}
workflowPath := "services"
eventName := "push"
workdir, err := filepath.Abs("testdata")
assert.NoError(t, err, workflowPath)
runnerConfig := &Config{
Workdir: workdir,
EventName: eventName,
Platforms: platforms,
ReuseContainers: false,
}
runner, err := New(runnerConfig)
assert.NoError(t, err, workflowPath)
planner, err := model.NewWorkflowPlanner(fmt.Sprintf("testdata/%s", workflowPath), true)
assert.NoError(t, err, workflowPath)
plan, err := planner.PlanEvent(eventName)
assert.NoError(t, err, workflowPath)
err = runner.NewPlanExecutor(plan)(ctx)
assert.NoError(t, err, workflowPath)
}
func TestRunActionInputs(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")


@@ -122,6 +122,15 @@ func runStepExecutor(step step, stage stepStage, executor common.Executor) commo
summaryFileCommand := path.Join("workflow", "SUMMARY.md")
(*step.getEnv())["GITHUB_STEP_SUMMARY"] = path.Join(actPath, summaryFileCommand)
{
// For Gitea
(*step.getEnv())["GITEA_OUTPUT"] = (*step.getEnv())["GITHUB_OUTPUT"]
(*step.getEnv())["GITEA_STATE"] = (*step.getEnv())["GITHUB_STATE"]
(*step.getEnv())["GITEA_PATH"] = (*step.getEnv())["GITHUB_PATH"]
(*step.getEnv())["GITEA_ENV"] = (*step.getEnv())["GITHUB_ENV"]
(*step.getEnv())["GITEA_STEP_SUMMARY"] = (*step.getEnv())["GITHUB_STEP_SUMMARY"]
}
_ = rc.JobContainer.Copy(actPath, &container.FileEntry{
Name: outputFileCommand,
Mode: 0o666,
@@ -221,7 +230,8 @@ func setupEnv(ctx context.Context, step step) error {
}
}
common.Logger(ctx).Debugf("setupEnv => %v", *step.getEnv())
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("setupEnv => %v", *step.getEnv())
return nil
}


@@ -33,9 +33,7 @@ type stepActionRemote struct {
resolvedSha string
}
var (
stepActionRemoteNewCloneExecutor = git.NewGitCloneExecutor
)
var stepActionRemoteNewCloneExecutor = git.NewGitCloneExecutor
func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return func(ctx context.Context) error {
@@ -44,14 +42,18 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return nil
}
// For Gitea:
// actions can specify the download source via a URL prefix, and the prefix
// may contain sensitive information that needs to be stored in secrets,
// so we need to interpolate the expression value of `uses` first.
sar.Step.Uses = sar.RunContext.NewExpressionEvaluator(ctx).Interpolate(ctx, sar.Step.Uses)
sar.remoteAction = newRemoteAction(sar.Step.Uses)
if sar.remoteAction == nil {
return fmt.Errorf("Expected format {org}/{repo}[/path]@ref. Actual '%s' Input string was not in a correct format", sar.Step.Uses)
}
github := sar.getGithubContext(ctx)
sar.remoteAction.URL = github.ServerURL
if sar.remoteAction.IsCheckout() && isLocalCheckout(github, sar.Step) && !sar.RunContext.Config.NoSkipCheckout {
common.Logger(ctx).Debugf("Skipping local actions/checkout because workdir was already copied")
return nil
@@ -106,13 +108,21 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return err
}
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), sar.Step.UsesHash())
gitClone := stepActionRemoteNewCloneExecutor(git.NewGitCloneExecutorInput{
URL: sar.remoteAction.CloneURL(),
Ref: sar.remoteAction.Ref,
Dir: actionDir,
Token: github.Token,
URL: sar.remoteAction.CloneURL(sar.RunContext.Config.DefaultActionInstance),
Ref: sar.remoteAction.Ref,
Dir: actionDir,
Token: "", /*
Shouldn't provide token when cloning actions,
the token comes from the instance which triggered the task,
however, it might not be the same instance which provides actions.
For GitHub, they are the same, always github.com.
But for Gitea, tasks triggered by a.com can clone actions from b.com.
*/
OfflineMode: sar.RunContext.Config.ActionOfflineMode,
InsecureSkipTLS: sar.cloneSkipTLS(), // For Gitea
})
var ntErr common.Executor
if err := gitClone(ctx); err != nil {
@@ -167,7 +177,7 @@ func (sar *stepActionRemote) main() common.Executor {
return sar.RunContext.JobContainer.CopyDir(copyToPath, sar.RunContext.Config.Workdir+string(filepath.Separator)+".", sar.RunContext.Config.UseGitIgnore)(ctx)
}
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), sar.Step.UsesHash())
return sar.runAction(sar, actionDir, sar.remoteAction)(ctx)
}),
@@ -226,7 +236,7 @@ func (sar *stepActionRemote) getActionModel() *model.Action {
func (sar *stepActionRemote) getCompositeRunContext(ctx context.Context) *RunContext {
if sar.compositeRunContext == nil {
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), sar.Step.UsesHash())
actionLocation := path.Join(actionDir, sar.remoteAction.Path)
_, containerActionDir := getContainerActionPaths(sar.getStepModel(), actionLocation, sar.RunContext)
@@ -250,6 +260,22 @@ func (sar *stepActionRemote) getCompositeSteps() *compositeSteps {
return sar.compositeSteps
}
// For Gitea
// cloneSkipTLS reports whether TLS verification should be skipped when the runner clones an action from the Gitea instance
func (sar *stepActionRemote) cloneSkipTLS() bool {
if !sar.RunContext.Config.InsecureSkipTLS {
// Return false if the Gitea instance is not an insecure instance
return false
}
if sar.remoteAction.URL == "" {
// Empty URL means the default action instance should be used
// Return true if the URL of the Gitea instance is the same as the URL of the default action instance
return sar.RunContext.Config.DefaultActionInstance == sar.RunContext.Config.GitHubInstance
}
// Return true if the URL of the remote action is the same as the URL of the Gitea instance
return sar.remoteAction.URL == sar.RunContext.Config.GitHubInstance
}
type remoteAction struct {
URL string
Org string
@@ -258,8 +284,16 @@ type remoteAction struct {
Ref string
}
func (ra *remoteAction) CloneURL() string {
return fmt.Sprintf("%s/%s/%s", ra.URL, ra.Org, ra.Repo)
func (ra *remoteAction) CloneURL(u string) string {
if ra.URL == "" {
if !strings.HasPrefix(u, "http://") && !strings.HasPrefix(u, "https://") {
u = "https://" + u
}
} else {
u = ra.URL
}
return fmt.Sprintf("%s/%s/%s", u, ra.Org, ra.Repo)
}
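With `URL` now optional, `CloneURL` falls back to the default instance passed in and only adds an `https://` scheme when that instance is a bare host; the expected values below line up with the `Test_newRemoteAction` cases further down. A minimal sketch (the `cloneURL` helper is an illustrative stand-in for the method above):

```go
package main

import (
	"fmt"
	"strings"
)

// cloneURL is an illustrative helper mirroring remoteAction.CloneURL above
// for a given default instance.
func cloneURL(actionURL, defaultInstance, org, repo string) string {
	u := defaultInstance
	if actionURL == "" {
		if !strings.HasPrefix(u, "http://") && !strings.HasPrefix(u, "https://") {
			u = "https://" + u
		}
	} else {
		u = actionURL
	}
	return fmt.Sprintf("%s/%s/%s", u, org, repo)
}

func main() {
	fmt.Println(cloneURL("", "github.com", "actions", "heroku"))                  // https://github.com/actions/heroku
	fmt.Println(cloneURL("https://gitea.com", "github.com", "actions", "heroku")) // https://gitea.com/actions/heroku
}
```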
func (ra *remoteAction) IsCheckout() bool {
@@ -270,6 +304,26 @@ func (ra *remoteAction) IsCheckout() bool {
}
func newRemoteAction(action string) *remoteAction {
// support http(s)://host/owner/repo@v3
for _, schema := range []string{"https://", "http://"} {
if strings.HasPrefix(action, schema) {
splits := strings.SplitN(strings.TrimPrefix(action, schema), "/", 2)
if len(splits) != 2 {
return nil
}
ret := parseAction(splits[1])
if ret == nil {
return nil
}
ret.URL = schema + splits[0]
return ret
}
}
return parseAction(action)
}
func parseAction(action string) *remoteAction {
// GitHub's document[^] describes:
// > We strongly recommend that you include the version of
// > the action you are using by specifying a Git ref, SHA, or Docker tag number.
@@ -285,7 +339,7 @@ func newRemoteAction(action string) *remoteAction {
Repo: matches[2],
Path: matches[4],
Ref: matches[6],
URL: "https://github.com",
URL: "",
}
}


@@ -616,6 +616,100 @@ func TestStepActionRemotePost(t *testing.T) {
}
}
func Test_newRemoteAction(t *testing.T) {
tests := []struct {
action string
want *remoteAction
wantCloneURL string
}{
{
action: "actions/heroku@main",
want: &remoteAction{
URL: "",
Org: "actions",
Repo: "heroku",
Path: "",
Ref: "main",
},
wantCloneURL: "https://github.com/actions/heroku",
},
{
action: "actions/aws/ec2@main",
want: &remoteAction{
URL: "",
Org: "actions",
Repo: "aws",
Path: "ec2",
Ref: "main",
},
wantCloneURL: "https://github.com/actions/aws",
},
{
action: "./.github/actions/my-action", // it's valid for GitHub, but act don't support it
want: nil,
},
{
action: "docker://alpine:3.8", // it's valid for GitHub, but act don't support it
want: nil,
},
{
action: "https://gitea.com/actions/heroku@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "https://gitea.com",
Org: "actions",
Repo: "heroku",
Path: "",
Ref: "main",
},
wantCloneURL: "https://gitea.com/actions/heroku",
},
{
action: "https://gitea.com/actions/aws/ec2@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "https://gitea.com",
Org: "actions",
Repo: "aws",
Path: "ec2",
Ref: "main",
},
wantCloneURL: "https://gitea.com/actions/aws",
},
{
action: "http://gitea.com/actions/heroku@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "http://gitea.com",
Org: "actions",
Repo: "heroku",
Path: "",
Ref: "main",
},
wantCloneURL: "http://gitea.com/actions/heroku",
},
{
action: "http://gitea.com/actions/aws/ec2@main", // it's invalid for GitHub, but gitea supports it
want: &remoteAction{
URL: "http://gitea.com",
Org: "actions",
Repo: "aws",
Path: "ec2",
Ref: "main",
},
wantCloneURL: "http://gitea.com/actions/aws",
},
}
for _, tt := range tests {
t.Run(tt.action, func(t *testing.T) {
got := newRemoteAction(tt.action)
assert.Equalf(t, tt.want, got, "newRemoteAction(%v)", tt.action)
cloneURL := ""
if got != nil {
cloneURL = got.CloneURL("github.com")
}
assert.Equalf(t, tt.wantCloneURL, cloneURL, "newRemoteAction(%v).CloneURL()", tt.action)
})
}
}
func Test_safeFilename(t *testing.T) {
tests := []struct {
s string


@@ -114,22 +114,24 @@ func (sd *stepDocker) newStepContainer(ctx context.Context, image string, cmd []
binds, mounts := rc.GetBindsAndMounts()
stepContainer := ContainerNewContainer(&container.NewContainerInput{
Cmd: cmd,
Entrypoint: entrypoint,
WorkingDir: rc.JobContainer.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: rc.Config.Secrets["DOCKER_USERNAME"],
Password: rc.Config.Secrets["DOCKER_PASSWORD"],
Name: createContainerName(rc.jobContainerName(), step.ID),
Env: envList,
Mounts: mounts,
NetworkMode: fmt.Sprintf("container:%s", rc.jobContainerName()),
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Cmd: cmd,
Entrypoint: entrypoint,
WorkingDir: rc.JobContainer.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: rc.Config.Secrets["DOCKER_USERNAME"],
Password: rc.Config.Secrets["DOCKER_PASSWORD"],
Name: createSimpleContainerName(rc.jobContainerName(), "STEP-"+step.ID),
Env: envList,
Mounts: mounts,
NetworkMode: fmt.Sprintf("container:%s", rc.jobContainerName()),
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
AutoRemove: rc.Config.AutoRemove,
ValidVolumes: rc.Config.ValidVolumes,
})
return stepContainer
}