* refactor: Generalize log ownership to allow for scratch jobs
Importing may fail when creating a project. We don't want to lose this output,
but we don't want to allow users to create a failing project.
This generalizes logs to soon enable one-off situations
where a user can upload their archive, create a project,
and watch the parse output stream to completion.
* Improve file table schema by using hash
* Fix racy test by allowing logs before
* Add debug logging for PostgreSQL insert
* ci: Replace DataDog CI with custom upload script
This will reduce CI time by ~6 minutes across all of
our runners. It's a bit janky, but I believe it's worth
the slight maintenance burden.
* Fix test race when job would complete too early
* Fix job cancellation override
* Fix race where provisioner job is inserted before project version
* chore: Rename ProjectHistory to ProjectVersion
Version more accurately represents version storage. This
diverges from the WorkspaceHistory name, but I think
"history" is still the clearer term for workspaces.
* Rename files
* Standardize tests a bit more
* Remove Server struct from coderdtest
* Improve test coverage for workspace history
* Fix linting errors
* Fix coderd test leak
* Fix coderd test leak
* Improve workspace history logs
* Standardize test structure for codersdk
* Fix linting errors
* Fix WebSocket compression
* Update coderd/workspaces.go
Co-authored-by: Bryan <bryan@coder.com>
* Add test for listing project parameters
* Cache npm dependencies with setup node
* Remove windows npm cache key
Co-authored-by: Bryan <bryan@coder.com>
* Rename files
This replaces the cdr-basic provisioner type with
"echo". It reads binary data from a directory
and returns the responses in order.
This is used to test project and workspace job logic.
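As a sketch of the approach (the helper name and file layout here are illustrative, not the actual implementation), replaying pre-recorded responses from disk could look like:
```go
package echo

import (
	"os"
	"path/filepath"
)

// readResponses is a hypothetical helper illustrating the idea: read
// pre-recorded responses from a directory and replay them in order.
// os.ReadDir returns entries sorted by filename, which fixes the
// replay order.
func readResponses(dir string) ([][]byte, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	responses := make([][]byte, 0, len(entries))
	for _, entry := range entries {
		data, err := os.ReadFile(filepath.Join(dir, entry.Name()))
		if err != nil {
			return nil, err
		}
		responses = append(responses, data)
	}
	return responses, nil
}
```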
* feat: Add parameter querying to the API
* feat: Add streaming endpoint for workspace history
Enables a buildlog-like flow for reading job output.
* Fix empty parameter source and destination
* Add comment for usage of workspace history logs endpoint
Enforces a consistent test package layout.
This makes it difficult to test internal
functionality, which I believe promotes
healthy decomposition and minimal package
exports.
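For illustration, the convention looks like this minimal sketch (package name hypothetical): test files declare an external `_test` package, so only exported identifiers are reachable:
```go
// coderd/coderd_test.go
package coderd_test // note the _test suffix: an external test package

import "testing"

// Internals of package coderd are not visible here, so tests must go
// through the exported API - the same surface consumers use.
func TestCoderd(t *testing.T) {
	t.Parallel()
}
```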
This hooks up `storybook`, which the front-end team has enjoyed using in the v1 codebase - it makes it quick and easy to view and test components in isolation.
The `<LoadingButton />` has a simple story added now, so if you run `yarn storybook`, you can preview it in various states:
![2022-01-31 19 24 24](https://user-images.githubusercontent.com/88213859/151908656-27dac0a8-9c6e-4353-ad25-3eafee979bd4.gif)
This will be helpful as we bring more front-end devs to help build v2 out.
* feat: Add history middleware parameters
These will be used for streaming logs, checking status,
and other operations related to workspace and project
history.
* refactor: Move all HTTP routes to top-level struct
Nesting handlers behind their respective structs
is leaky, and promotes naming conflicts between handlers.
Our HTTP routes cannot have conflicts, so neither should
our function names.
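Roughly, the shape this moves us toward (a sketch assuming a chi-style router; the handler and route names are illustrative):
```go
package coderd

import (
	"net/http"

	"github.com/go-chi/chi/v5"
)

// api holds every HTTP handler as a method, so two handlers can never
// share a name - mirroring how two routes can never share a path.
type api struct{}

func (*api) projectHistory(w http.ResponseWriter, r *http.Request)   {}
func (*api) workspaceHistory(w http.ResponseWriter, r *http.Request) {}

func routes(a *api) http.Handler {
	r := chi.NewRouter()
	r.Get("/projects/{project}/history", a.projectHistory)
	r.Get("/workspaces/{workspace}/history", a.workspaceHistory)
	return r
}
```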
* Add provisioner daemon routes
* Add periodic updates
* Skip pubsub if short
* Return jobs with WorkspaceHistory
* Add endpoints for extracting singular history
* The full end-to-end operation works
* fix: Disable compression for websocket dRPC transport (#145)
There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and `dRPC` feel like they own the `[]byte` being sent between them. This can lead to data races in which both `dRPC` and the websocket are writing.
This is just tracking some experimentation to fix that race condition.
## Run results: ##
- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45
```
status code 412: The provided project history is running. Wait for it to complete importing!
```
- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176
```
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
- Run 5: peer test failure
- Run 6: Pass ✅
- Run 7: peer test failure
## Open Questions: ##
### Is `dRPC` or `websocket` at fault for the data race?
It looks like this condition is specifically happening when `dRPC` decides to `SendError`. This constructs a new byte payload from [`MarshalError`](f6e369438f/drpcwire/error.go (L15)) - so `dRPC` has created this buffer and owns it.
From `dRPC`'s perspective, the callstack looks like this:
- [`sendPacket`](f6e369438f/drpcstream/stream.go (L253))
- [`writeFrame`](f6e369438f/drpcwire/writer.go (L65))
- [`AppendFrame`](f6e369438f/drpcwire/packet.go (L128))
- with the data race finally happening here:
```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
    ...
    out := buf
    out = append(out, control) // <---------
```
This should be fine, since `dRPC` created this buffer: it takes the byte buffer constructed from `MarshalError` and tacks a bunch of headers onto it to create a proper frame.
Once `dRPC` is done writing, it _hangs onto the buffer and resets it here_: f6e369438f/drpcwire/writer.go (L73)
However... once the websocket implementation gets the buffer, it runs a `statelessDeflate` [here](8dee580a7f/write.go (L180)), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](a1a9cfc821/flate/stateless.go (L94)), which is where we get our race.
In the case where the `[]byte` isn't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and `websocket` is compressing on the fly).
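One way to break that shared ownership would be to copy the buffer at the transport boundary; in sketch form (a hypothetical `transport` wrapper, assuming the `nhooyr.io/websocket` package):
```go
package peerbroker

import (
	"context"

	"nhooyr.io/websocket"
)

// transport is a hypothetical adapter sitting between dRPC and the
// websocket connection.
type transport struct {
	ctx  context.Context
	conn *websocket.Conn
}

// Write copies each dRPC frame before the websocket can deflate it in
// place, so dRPC's buffer reuse never races with compression.
func (t *transport) Write(p []byte) (int, error) {
	clone := make([]byte, len(p))
	copy(clone, p)
	if err := t.conn.Write(t.ctx, websocket.MessageBinary, clone); err != nil {
		return 0, err
	}
	return len(p), nil
}
```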
### Why does cloning on `Read` fail?
We get a bunch of errors like:
```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```
## UPDATE: ##
We decided we could disable websocket compression, which would avoid the race because the in-place `deflate` operation would no longer be run. Trying that out now:
- Run 1: ✅
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3: ✅
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: ✅
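For reference, disabling compression is a one-line option at the accept site (a sketch assuming `nhooyr.io/websocket`; the handler shape is illustrative):
```go
package coderd

import (
	"net/http"

	"nhooyr.io/websocket"
)

func provisionerDaemonWebsocket(w http.ResponseWriter, r *http.Request) {
	// CompressionDisabled keeps the library from deflating (and thereby
	// mutating) outgoing buffers in place.
	conn, err := websocket.Accept(w, r, &websocket.AcceptOptions{
		CompressionMode: websocket.CompressionDisabled,
	})
	if err != nil {
		return
	}
	defer conn.Close(websocket.StatusNormalClosure, "")
	// ... hand conn off to the dRPC transport ...
}
```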
* fix: Remove race condition with acquiredJobDone channel (#148)
Found another data race while running the tests: https://github.com/coder/coder/runs/5044320845?check_suite_focus=true#step:7:83
__Issue:__ There is a race on the `p.acquiredJobDone` channel - in particular, there can be a case where we're waiting on the channel to finish (in `close`) with `<-p.acquiredJobDone`, but in parallel, an `acquireJob` could've been started, which would create a new channel for `p.acquiredJobDone`. There is a similar race in `close(..)`-ing the channel, which also came up in test runs.
__Fix:__ Instead of recreating the channel every time, we can use a `sync.WaitGroup` to accomplish the same functionality - a semaphore that makes `close` wait for the current job to wrap up.
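A minimal sketch of that shape (field and method names illustrative, not the actual provisionerd code):
```go
package provisionerd

import "sync"

// daemon is illustrative; the point is the WaitGroup acting as the
// semaphore the recreated channel was trying to be.
type daemon struct {
	acquiredJob sync.WaitGroup
}

func (d *daemon) acquireJob(run func()) {
	d.acquiredJob.Add(1)
	go func() {
		defer d.acquiredJob.Done()
		run() // execute the acquired job
	}()
}

func (d *daemon) Close() error {
	d.acquiredJob.Wait() // block until any in-flight job wraps up
	return nil
}
```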
* fix: Bump up workspace history timeout (#149)
This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32
Looking at the timing of the test:
```
t.go:56: 2022-02-02 21:33:21.964 [DEBUG] (terraform-provisioner) <provision.go:139> ran apply
t.go:56: 2022-02-02 21:33:21.991 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.050 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.090 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.140 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.195 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.240 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
It appears that the `terraform apply` job had just finished - with less than a second to spare until our `require.Eventually` completes - but there's still work to be done (i.e., collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout.
Note that the setup for this test already has a similar project history wait of 15s, so I borrowed that value here.
In the future, we can look at using the simple echo provisioner to exercise this in the unit test, with more reliable timing. I'll log an issue to track that.
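The change itself is just the `require.Eventually` wait bound; in sketch form (the polling condition below is a hypothetical stand-in for the test's real one):
```go
package coderd_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestWorkspaceHistory(t *testing.T) {
	// historyJobCompleted is a hypothetical stand-in for polling the
	// workspace history job's status via the API.
	historyJobCompleted := func() bool { return true }

	// Bumped from 5s to 15s, borrowing the project history wait used
	// in this test's setup.
	require.Eventually(t, historyJobCompleted, 15*time.Second, 50*time.Millisecond)
}
```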
Co-authored-by: Bryan <bryan@coder.com>
Unfortunately, along with #133, I missed one other item when moving files to `/site` in #128: ignoring new configuration files (and other folders) in our `jest.config.js`.
This caused our coverage numbers in front-end code to dip, because files that shouldn't be included (and previously weren't included) were now being tracked for coverage.
Missed in #128 - we need to update `dependabot.yml` to point to the correct `package.json`, which moved from `/package.json` -> `/site/package.json`.
This refactors the front-end collateral to all live within `site` - so no `package.json` at the root.
The reason we had this initially is that the jest test run and NextJS actually require having _two_ different `tsconfig`s - Next needs `jsx:"preserve"`, while jest needs `jsx:"react"` - and we were using `tsconfig`s at different levels of the hierarchy to manage this.
I changed this behavior to still use two different `tsconfig.json`s, which is mandatory - but just side-by-side in `site`.
Once that was fixed, it was easy to move everything into `site`.
Follow up from: https://github.com/coder/coder/pull/118#discussion_r796244577
This brings an async service that parses and
provisions to life! It's separated from coderd
intentionally to allow for simpler testing.
Integration with coderd will come in another PR!
@kylecarbs and I were debugging a gnarly postgres issue over the weekend, and unfortunately it looks like it is still coming up occasionally: https://github.com/coder/coder/runs/5014420662?check_suite_focus=true#step:8:35 - so I thought this might be a good testing Monday task.
Intermittently, the test would fail with something like a `401` - invalid e-mail, or a `409` - initial user already created. This was quite surprising, because the tests are designed to spin up their own, isolated database.
We tried a few things to debug this...
## Attempt 1: Log out the generated port numbers when running the docker image.
Based on the errors, it seemed like one test must be connecting to another test's database - that would explain why we'd get these conflicts! However, logging out the port number that came from docker always gave a unique number... and we couldn't find evidence of one test connecting to another test's database.
## Attempt 2: Store the database in a unique, temporary folder.
@kylecarbs and I found that there was a [volume](a83005b407/11/alpine/Dockerfile (L155)) for the postgres data... so @kylecarbs implemented mounting the volume to a unique, per-test temporary folder in https://github.com/coder/coder/pull/89
It sounded really promising... but unfortunately we hit the issue again!
## Attempt 3: This PR
After we hit the failure again, we noticed in the `docker ps` logs something quite strange:
![image](https://user-images.githubusercontent.com/88213859/151913133-522a6c2e-977a-4a65-9315-804531ab7d77.png)
When the docker image is run - it creates two port bindings, an IPv4 and an IPv6 one. These _should be the same_ - but surprisingly, they can sometimes be different. It isn't deterministic, and seems to be more common when there are multiple containers running. Importantly, __they can overlap__ as in the above image.
Turns out, it seems this is a docker bug: https://github.com/moby/moby/issues/42442 - which may be fixed in newer versions.
To work around this bug, we have to manipulate the port bindings (like you would with `-p`) at the command line. We can do this with `docker`/`dockertest`, but it means we have to get a free port ahead of time to know which port to map.
With that fix in, the `docker ps` output is a little more sane:
![image](https://user-images.githubusercontent.com/88213859/151913432-5f86bc09-8604-4355-ad49-0abeaf8cc0fe.png)
...and hopefully means we can safely run the containers in parallel again.
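In sketch form, the workaround looks roughly like this (assuming `ory/dockertest`; the function names and Postgres tag are illustrative):
```go
package database_test

import (
	"fmt"
	"net"

	"github.com/ory/dockertest/v3"
	"github.com/ory/dockertest/v3/docker"
)

// freePort asks the kernel for an available TCP port ahead of time.
func freePort() (string, error) {
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer listener.Close()
	return fmt.Sprintf("%d", listener.Addr().(*net.TCPAddr).Port), nil
}

func runPostgres(pool *dockertest.Pool) (*dockertest.Resource, error) {
	port, err := freePort()
	if err != nil {
		return nil, err
	}
	// Pin both the IPv4 and IPv6 bindings to the same host port, so
	// docker cannot hand each stack a different (and overlapping) one.
	return pool.RunWithOptions(&dockertest.RunOptions{
		Repository: "postgres",
		Tag:        "11",
		PortBindings: map[docker.Port][]docker.PortBinding{
			"5432/tcp": {{HostIP: "0.0.0.0", HostPort: port}},
		},
	})
}
```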
Use the native `concurrency` configuration feature to cancel
concurrent builds, rather than the cancel-workflow-action.
This also allows us to reduce permissions for the workflow.
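For reference, the native feature is a few lines at the top of each workflow (a sketch; the group key is illustrative):
```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```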