package dbmem

import (
	"context"
	"database/sql"
	"encoding/json"
	"errors"
	"fmt"
	"reflect"
	"regexp"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/google/uuid"
	"github.com/lib/pq"
	"golang.org/x/exp/maps"
	"golang.org/x/exp/slices"
	"golang.org/x/xerrors"

	"github.com/coder/coder/v2/coderd/database"
	"github.com/coder/coder/v2/coderd/database/dbtime"
	"github.com/coder/coder/v2/coderd/rbac"
	"github.com/coder/coder/v2/coderd/rbac/regosql"
	"github.com/coder/coder/v2/coderd/util/slice"
	"github.com/coder/coder/v2/coderd/workspaceapps/appurl"
	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/provisionersdk"
)

var validProxyByHostnameRegex = regexp.MustCompile(`^[a-zA-Z0-9._-]+$`)

var errForeignKeyConstraint = &pq.Error{
	Code:    "23503",
	Message: "update or delete on table violates foreign key constraint",
}

var errDuplicateKey = &pq.Error{
	Code:    "23505",
	Message: "duplicate key value violates unique constraint",
}

// New returns an in-memory fake of the database.
func New() database.Store {
	q := &FakeQuerier{
		mutex: &sync.RWMutex{},
		data: &data{
			apiKeys:                   make([]database.APIKey, 0),
			organizationMembers:       make([]database.OrganizationMember, 0),
			organizations:             make([]database.Organization, 0),
			users:                     make([]database.User, 0),
			dbcryptKeys:               make([]database.DBCryptKey, 0),
			externalAuthLinks:         make([]database.ExternalAuthLink, 0),
			groups:                    make([]database.Group, 0),
			groupMembers:              make([]database.GroupMember, 0),
			auditLogs:                 make([]database.AuditLog, 0),
			files:                     make([]database.File, 0),
			gitSSHKey:                 make([]database.GitSSHKey, 0),
			parameterSchemas:          make([]database.ParameterSchema, 0),
			provisionerDaemons:        make([]database.ProvisionerDaemon, 0),
			workspaceAgents:           make([]database.WorkspaceAgent, 0),
			provisionerJobLogs:        make([]database.ProvisionerJobLog, 0),
			workspaceResources:        make([]database.WorkspaceResource, 0),
			workspaceResourceMetadata: make([]database.WorkspaceResourceMetadatum, 0),
			provisionerJobs:           make([]database.ProvisionerJob, 0),
			templateVersions:          make([]database.TemplateVersionTable, 0),
			templates:                 make([]database.TemplateTable, 0),
			workspaceAgentStats:       make([]database.WorkspaceAgentStat, 0),
			workspaceAgentLogs:        make([]database.WorkspaceAgentLog, 0),
			workspaceBuilds:           make([]database.WorkspaceBuildTable, 0),
			workspaceApps:             make([]database.WorkspaceApp, 0),
			workspaces:                make([]database.Workspace, 0),
			licenses:                  make([]database.License, 0),
			workspaceProxies:          make([]database.WorkspaceProxy, 0),
			locks:                     map[int64]struct{}{},
		},
	}
	q.defaultProxyDisplayName = "Default"
	q.defaultProxyIconURL = "/emojis/1f3e1.png"
	return q
}
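
// A minimal usage sketch (illustrative, not part of the original file): tests
// construct the fake and drive it through the database.Store interface, e.g.
//
//	db := dbmem.New()
//	_, _ = db.Ping(context.Background())
//	_ = db.InTx(func(tx database.Store) error {
//		return tx.AcquireLock(context.Background(), 1)
//	}, nil)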

type rwMutex interface {
	Lock()
	RLock()
	Unlock()
	RUnlock()
}

// inTxMutex is a no op, since inside a transaction we are already locked.
type inTxMutex struct{}

func (inTxMutex) Lock()    {}
func (inTxMutex) RLock()   {}
func (inTxMutex) Unlock()  {}
func (inTxMutex) RUnlock() {}

// FakeQuerier replicates database functionality to enable quick testing. It's an exported type so that our test code
// can do type checks.
type FakeQuerier struct {
	mutex rwMutex
	*data
}


func (*FakeQuerier) Wrappers() []string {
	return []string{}
}

type fakeTx struct {
	*FakeQuerier
	locks map[int64]struct{}
}


type data struct {
	// Legacy tables
	apiKeys             []database.APIKey
	organizations       []database.Organization
	organizationMembers []database.OrganizationMember
	users               []database.User
	userLinks           []database.UserLink

	// New tables
	workspaceAgentStats           []database.WorkspaceAgentStat
	auditLogs                     []database.AuditLog
	dbcryptKeys                   []database.DBCryptKey
	files                         []database.File
	externalAuthLinks             []database.ExternalAuthLink
	gitSSHKey                     []database.GitSSHKey
	groupMembers                  []database.GroupMember
	groups                        []database.Group
	jfrogXRayScans                []database.JfrogXrayScan
	licenses                      []database.License
	oauth2ProviderApps            []database.OAuth2ProviderApp
	oauth2ProviderAppSecrets      []database.OAuth2ProviderAppSecret
	parameterSchemas              []database.ParameterSchema
	provisionerDaemons            []database.ProvisionerDaemon
	provisionerJobLogs            []database.ProvisionerJobLog
	provisionerJobs               []database.ProvisionerJob
	replicas                      []database.Replica
	templateVersions              []database.TemplateVersionTable
	templateVersionParameters     []database.TemplateVersionParameter
	templateVersionVariables      []database.TemplateVersionVariable
	templates                     []database.TemplateTable
	workspaceAgents               []database.WorkspaceAgent
	workspaceAgentMetadata        []database.WorkspaceAgentMetadatum
	workspaceAgentLogs            []database.WorkspaceAgentLog
	workspaceAgentLogSources      []database.WorkspaceAgentLogSource
	workspaceAgentScripts         []database.WorkspaceAgentScript
	workspaceAgentPortShares      []database.WorkspaceAgentPortShare
	workspaceApps                 []database.WorkspaceApp
	workspaceAppStatsLastInsertID int64
	workspaceAppStats             []database.WorkspaceAppStat
	workspaceBuilds               []database.WorkspaceBuildTable
	workspaceBuildParameters      []database.WorkspaceBuildParameter
	workspaceResourceMetadata     []database.WorkspaceResourceMetadatum
	workspaceResources            []database.WorkspaceResource
	workspaces                    []database.Workspace
	workspaceProxies              []database.WorkspaceProxy
	// Locks is a map of lock names. Any keys within the map are currently
	// locked.
	locks                   map[int64]struct{}
	deploymentID            string
	derpMeshKey             string
	lastUpdateCheck         []byte
	serviceBanner           []byte
	healthSettings          []byte
	applicationName         string
	logoURL                 string
	appSecurityKey          string
	oauthSigningKey         string
	lastLicenseID           int32
	defaultProxyDisplayName string
	defaultProxyIconURL     string
}

func validateDatabaseTypeWithValid(v reflect.Value) (handled bool, err error) {
	if v.Kind() == reflect.Struct {
		return false, nil
	}

	if v.CanInterface() {
		if !strings.Contains(v.Type().PkgPath(), "coderd/database") {
			return true, nil
		}
		if valid, ok := v.Interface().(interface{ Valid() bool }); ok {
			if !valid.Valid() {
				return true, xerrors.Errorf("invalid %s: %q", v.Type().Name(), v.Interface())
			}
		}
		return true, nil
	}
	return false, nil
}

// validateDatabaseType uses reflect to check if struct properties are types
// with a Valid() bool function set. If so, call it and return an error
// if false.
//
// Note that we only check immediate values and struct fields. We do not
// recurse into nested structs.
func validateDatabaseType(args interface{}) error {
	v := reflect.ValueOf(args)

	// Note: database.Null* types don't have a Valid method, we skip them here
	// because their embedded types may have a Valid method and we don't want
	// to bother with checking both that the Valid field is true and that the
	// type it embeds validates to true. We would need to check:
	//
	//	dbNullEnum.Valid && dbNullEnum.Enum.Valid()
	if strings.HasPrefix(v.Type().Name(), "Null") {
		return nil
	}

	if ok, err := validateDatabaseTypeWithValid(v); ok {
		return err
	}
	switch v.Kind() {
	case reflect.Struct:
		var errs []string
		for i := 0; i < v.NumField(); i++ {
			field := v.Field(i)
			if ok, err := validateDatabaseTypeWithValid(field); ok && err != nil {
				errs = append(errs, fmt.Sprintf("%s.%s: %s", v.Type().Name(), v.Type().Field(i).Name, err.Error()))
			}
		}
		if len(errs) > 0 {
			return xerrors.Errorf("invalid database type fields:\n\t%s", strings.Join(errs, "\n\t"))
		}
	default:
		panic(fmt.Sprintf("unhandled type: %s", v.Type().Name()))
	}
	return nil
}
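
// For illustration (a hypothetical sketch, not part of the original file):
// any params struct whose immediate field type lives in coderd/database and
// implements Valid() bool is rejected before the fake touches its data.
// Roughly:
//
//	type exampleParams struct {
//		Transition database.WorkspaceTransition
//	}
//	// validateDatabaseType(exampleParams{Transition: "bogus"}) would return
//	// an "invalid database type fields" error, assuming
//	// database.WorkspaceTransition exposes a Valid() bool method.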

func (*FakeQuerier) Ping(_ context.Context) (time.Duration, error) {
	return 0, nil
}

func (tx *fakeTx) AcquireLock(_ context.Context, id int64) error {
	if _, ok := tx.FakeQuerier.locks[id]; ok {
		return xerrors.Errorf("cannot acquire lock %d: already held", id)
	}
	tx.FakeQuerier.locks[id] = struct{}{}
	tx.locks[id] = struct{}{}
	return nil
}

func (tx *fakeTx) TryAcquireLock(_ context.Context, id int64) (bool, error) {
	if _, ok := tx.FakeQuerier.locks[id]; ok {
		return false, nil
	}
	tx.FakeQuerier.locks[id] = struct{}{}
	tx.locks[id] = struct{}{}
	return true, nil
}

func (tx *fakeTx) releaseLocks() {
	for id := range tx.locks {
		delete(tx.FakeQuerier.locks, id)
	}
	tx.locks = map[int64]struct{}{}
}

// InTx doesn't rollback data properly for in-memory yet.
func (q *FakeQuerier) InTx(fn func(database.Store) error, _ *sql.TxOptions) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()
	tx := &fakeTx{
		FakeQuerier: &FakeQuerier{mutex: inTxMutex{}, data: q.data},
		locks:       map[int64]struct{}{},
	}
	defer tx.releaseLocks()

	return fn(tx)
}
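
// Sketch of the lock bookkeeping above (illustrative; ctx stands in for any
// context.Context):
//
//	_ = q.InTx(func(store database.Store) error {
//		// Records the ID in both the shared map and the per-tx map.
//		if err := store.AcquireLock(ctx, 42); err != nil {
//			return err
//		}
//		// A second acquisition of the same ID inside the transaction
//		// reports the lock as already held.
//		held, _ := store.TryAcquireLock(ctx, 42) // held == false
//		_ = held
//		return nil
//	}, nil)
//	// releaseLocks then removes the IDs from the shared map on return.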

// getUserByIDNoLock is used by other functions in the database fake.
func (q *FakeQuerier) getUserByIDNoLock(id uuid.UUID) (database.User, error) {
	for _, user := range q.users {
		if user.ID == id {
			return user, nil
		}
	}
	return database.User{}, sql.ErrNoRows
}

func convertUsers(users []database.User, count int64) []database.GetUsersRow {
	rows := make([]database.GetUsersRow, len(users))
	for i, u := range users {
		rows[i] = database.GetUsersRow{
			ID:             u.ID,
			Email:          u.Email,
			Username:       u.Username,
			HashedPassword: u.HashedPassword,
			CreatedAt:      u.CreatedAt,
			UpdatedAt:      u.UpdatedAt,
			Status:         u.Status,
			RBACRoles:      u.RBACRoles,
			LoginType:      u.LoginType,
			AvatarURL:      u.AvatarURL,
			Deleted:        u.Deleted,
			LastSeenAt:     u.LastSeenAt,
			Count:          count,
		}
	}

	return rows
}

// mapAgentStatus determines the agent status based on different timestamps like created_at, last_connected_at, disconnected_at, etc.
// The function must be in sync with: coderd/workspaceagents.go:convertWorkspaceAgent.
func mapAgentStatus(dbAgent database.WorkspaceAgent, agentInactiveDisconnectTimeoutSeconds int64) string {
	var status string
	connectionTimeout := time.Duration(dbAgent.ConnectionTimeoutSeconds) * time.Second
	switch {
	case !dbAgent.FirstConnectedAt.Valid:
		switch {
		case connectionTimeout > 0 && dbtime.Now().Sub(dbAgent.CreatedAt) > connectionTimeout:
			// If the agent took too long to connect the first time,
			// mark it as timed out.
			status = "timeout"
		default:
			// If the agent never connected, it's waiting for the compute
			// to start up.
			status = "connecting"
		}
	case dbAgent.DisconnectedAt.Time.After(dbAgent.LastConnectedAt.Time):
		// If we've disconnected after our last connection, we know the
		// agent is no longer connected.
		status = "disconnected"
	case dbtime.Now().Sub(dbAgent.LastConnectedAt.Time) > time.Duration(agentInactiveDisconnectTimeoutSeconds)*time.Second:
		// The connection died without updating the last connected.
		status = "disconnected"
	case dbAgent.LastConnectedAt.Valid:
		// The agent should be assumed connected if it's under inactivity timeouts
		// and last connected at has been properly set.
		status = "connected"
	default:
		panic("unknown agent status: " + status)
	}
	return status
}
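
// The switch above, summarized (restating the logic, not adding to it):
//
//	never connected, past ConnectionTimeoutSeconds      -> "timeout"
//	never connected, still within the timeout           -> "connecting"
//	disconnected_at is after last_connected_at          -> "disconnected"
//	last connection older than the inactivity timeout   -> "disconnected"
//	otherwise, with a valid last_connected_at           -> "connected"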

func (q *FakeQuerier) convertToWorkspaceRowsNoLock(ctx context.Context, workspaces []database.Workspace, count int64) []database.GetWorkspacesRow {
	rows := make([]database.GetWorkspacesRow, 0, len(workspaces))
	for _, w := range workspaces {
		wr := database.GetWorkspacesRow{
			ID:                w.ID,
			CreatedAt:         w.CreatedAt,
			UpdatedAt:         w.UpdatedAt,
			OwnerID:           w.OwnerID,
			OrganizationID:    w.OrganizationID,
			TemplateID:        w.TemplateID,
			Deleted:           w.Deleted,
			Name:              w.Name,
			AutostartSchedule: w.AutostartSchedule,
			Ttl:               w.Ttl,
			LastUsedAt:        w.LastUsedAt,
			DormantAt:         w.DormantAt,
			DeletingAt:        w.DeletingAt,
			Count:             count,
			AutomaticUpdates:  w.AutomaticUpdates,
			Favorite:          w.Favorite,
		}

		for _, t := range q.templates {
			if t.ID == w.TemplateID {
				wr.TemplateName = t.Name
				break
			}
		}

		if build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, w.ID); err == nil {
			for _, tv := range q.templateVersions {
				if tv.ID == build.TemplateVersionID {
					wr.TemplateVersionID = tv.ID
					wr.TemplateVersionName = sql.NullString{
						Valid:  true,
						String: tv.Name,
					}
					break
				}
			}
		}

		rows = append(rows, wr)
	}
	return rows
}

func (q *FakeQuerier) getWorkspaceByIDNoLock(_ context.Context, id uuid.UUID) (database.Workspace, error) {
	for _, workspace := range q.workspaces {
		if workspace.ID == id {
			return workspace, nil
		}
	}
	return database.Workspace{}, sql.ErrNoRows
}

func (q *FakeQuerier) getWorkspaceByAgentIDNoLock(_ context.Context, agentID uuid.UUID) (database.Workspace, error) {
	var agent database.WorkspaceAgent
	for _, _agent := range q.workspaceAgents {
		if _agent.ID == agentID {
			agent = _agent
			break
		}
	}
	if agent.ID == uuid.Nil {
		return database.Workspace{}, sql.ErrNoRows
	}

	var resource database.WorkspaceResource
	for _, _resource := range q.workspaceResources {
		if _resource.ID == agent.ResourceID {
			resource = _resource
			break
		}
	}
	if resource.ID == uuid.Nil {
		return database.Workspace{}, sql.ErrNoRows
	}

	var build database.WorkspaceBuild
	for _, _build := range q.workspaceBuilds {
		if _build.JobID == resource.JobID {
			build = q.workspaceBuildWithUserNoLock(_build)
			break
		}
	}
	if build.ID == uuid.Nil {
		return database.Workspace{}, sql.ErrNoRows
	}

	for _, workspace := range q.workspaces {
		if workspace.ID == build.WorkspaceID {
			return workspace, nil
		}
	}

	return database.Workspace{}, sql.ErrNoRows
}

func (q *FakeQuerier) getWorkspaceBuildByIDNoLock(_ context.Context, id uuid.UUID) (database.WorkspaceBuild, error) {
	for _, build := range q.workspaceBuilds {
		if build.ID == id {
			return q.workspaceBuildWithUserNoLock(build), nil
		}
	}
	return database.WorkspaceBuild{}, sql.ErrNoRows
}

func (q *FakeQuerier) getLatestWorkspaceBuildByWorkspaceIDNoLock(_ context.Context, workspaceID uuid.UUID) (database.WorkspaceBuild, error) {
	var row database.WorkspaceBuild
	var buildNum int32 = -1
	for _, workspaceBuild := range q.workspaceBuilds {
		if workspaceBuild.WorkspaceID == workspaceID && workspaceBuild.BuildNumber > buildNum {
			row = q.workspaceBuildWithUserNoLock(workspaceBuild)
			buildNum = workspaceBuild.BuildNumber
		}
	}
	if buildNum == -1 {
		return database.WorkspaceBuild{}, sql.ErrNoRows
	}
	return row, nil
}

func (q *FakeQuerier) getTemplateByIDNoLock(_ context.Context, id uuid.UUID) (database.Template, error) {
	for _, template := range q.templates {
		if template.ID == id {
			return q.templateWithUserNoLock(template), nil
		}
	}
	return database.Template{}, sql.ErrNoRows
}

func (q *FakeQuerier) templatesWithUserNoLock(tpl []database.TemplateTable) []database.Template {
	cpy := make([]database.Template, 0, len(tpl))
	for _, t := range tpl {
		cpy = append(cpy, q.templateWithUserNoLock(t))
	}
	return cpy
}

func (q *FakeQuerier) templateWithUserNoLock(tpl database.TemplateTable) database.Template {
	var user database.User
	for _, _user := range q.users {
		if _user.ID == tpl.CreatedBy {
			user = _user
			break
		}
	}
	var withUser database.Template
	// This is a cheeky way to copy the fields over without explicitly listing them all.
	d, _ := json.Marshal(tpl)
	_ = json.Unmarshal(d, &withUser)
	withUser.CreatedByUsername = user.Username
	withUser.CreatedByAvatarURL = user.AvatarURL
	return withUser
}

func (q *FakeQuerier) templateVersionWithUserNoLock(tpl database.TemplateVersionTable) database.TemplateVersion {
	var user database.User
	for _, _user := range q.users {
		if _user.ID == tpl.CreatedBy {
			user = _user
			break
		}
	}
	var withUser database.TemplateVersion
	// This is a cheeky way to copy the fields over without explicitly listing them all.
	d, _ := json.Marshal(tpl)
	_ = json.Unmarshal(d, &withUser)
	withUser.CreatedByUsername = user.Username
	withUser.CreatedByAvatarURL = user.AvatarURL
	return withUser
}

func (q *FakeQuerier) workspaceBuildWithUserNoLock(tpl database.WorkspaceBuildTable) database.WorkspaceBuild {
	var user database.User
	for _, _user := range q.users {
		if _user.ID == tpl.InitiatorID {
			user = _user
			break
		}
	}
	var withUser database.WorkspaceBuild
	// This is a cheeky way to copy the fields over without explicitly listing them all.
	d, _ := json.Marshal(tpl)
	_ = json.Unmarshal(d, &withUser)
	withUser.InitiatorByUsername = user.Username
	withUser.InitiatorByAvatarUrl = user.AvatarURL
	return withUser
}

func (q *FakeQuerier) getTemplateVersionByIDNoLock(_ context.Context, templateVersionID uuid.UUID) (database.TemplateVersion, error) {
	for _, templateVersion := range q.templateVersions {
		if templateVersion.ID != templateVersionID {
			continue
		}
		return q.templateVersionWithUserNoLock(templateVersion), nil
	}
	return database.TemplateVersion{}, sql.ErrNoRows
}

func (q *FakeQuerier) getWorkspaceAgentByIDNoLock(_ context.Context, id uuid.UUID) (database.WorkspaceAgent, error) {
	// The schema sorts this by created at, so we iterate the array backwards.
	for i := len(q.workspaceAgents) - 1; i >= 0; i-- {
		agent := q.workspaceAgents[i]
		if agent.ID == id {
			return agent, nil
		}
	}
	return database.WorkspaceAgent{}, sql.ErrNoRows
}

func (q *FakeQuerier) getWorkspaceAgentsByResourceIDsNoLock(_ context.Context, resourceIDs []uuid.UUID) ([]database.WorkspaceAgent, error) {
	workspaceAgents := make([]database.WorkspaceAgent, 0)
	for _, agent := range q.workspaceAgents {
		for _, resourceID := range resourceIDs {
			if agent.ResourceID != resourceID {
				continue
			}
			workspaceAgents = append(workspaceAgents, agent)
		}
	}
	return workspaceAgents, nil
}

func (q *FakeQuerier) getWorkspaceAppByAgentIDAndSlugNoLock(_ context.Context, arg database.GetWorkspaceAppByAgentIDAndSlugParams) (database.WorkspaceApp, error) {
	for _, app := range q.workspaceApps {
		if app.AgentID != arg.AgentID {
			continue
		}
		if app.Slug != arg.Slug {
			continue
		}
		return app, nil
	}
	return database.WorkspaceApp{}, sql.ErrNoRows
}

func (q *FakeQuerier) getProvisionerJobByIDNoLock(_ context.Context, id uuid.UUID) (database.ProvisionerJob, error) {
	for _, provisionerJob := range q.provisionerJobs {
		if provisionerJob.ID != id {
			continue
		}
		// clone the Tags before returning, since maps are reference types and
		// we don't want the caller to be able to mutate the map we have inside
		// dbmem!
		provisionerJob.Tags = maps.Clone(provisionerJob.Tags)
		return provisionerJob, nil
	}
	return database.ProvisionerJob{}, sql.ErrNoRows
}

func (q *FakeQuerier) getWorkspaceResourcesByJobIDNoLock(_ context.Context, jobID uuid.UUID) ([]database.WorkspaceResource, error) {
	resources := make([]database.WorkspaceResource, 0)
	for _, resource := range q.workspaceResources {
		if resource.JobID != jobID {
			continue
		}
		resources = append(resources, resource)
	}
	return resources, nil
}

func (q *FakeQuerier) getGroupByIDNoLock(_ context.Context, id uuid.UUID) (database.Group, error) {
	for _, group := range q.groups {
		if group.ID == id {
			return group, nil
		}
	}

	return database.Group{}, sql.ErrNoRows
}

// ErrUnimplemented is returned by methods only used by the enterprise/tailnet.pgCoord. This coordinator explicitly
// depends on postgres triggers that announce changes on the pubsub. Implementing support for this in the fake
// database would strongly couple the FakeQuerier to the pubsub, which is undesirable. Furthermore, it makes little
// sense to directly test the pgCoord against anything other than postgres. The FakeQuerier is designed to allow us to
// test the Coderd API, and for that kind of test, the in-memory, AGPL tailnet coordinator is sufficient. Therefore,
// these methods remain unimplemented in the FakeQuerier.
var ErrUnimplemented = xerrors.New("unimplemented")

func uniqueSortedUUIDs(uuids []uuid.UUID) []uuid.UUID {
	set := make(map[uuid.UUID]struct{})
	for _, id := range uuids {
		set[id] = struct{}{}
	}
	unique := make([]uuid.UUID, 0, len(set))
	for id := range set {
		unique = append(unique, id)
	}
	slices.SortFunc(unique, func(a, b uuid.UUID) int {
		return slice.Ascending(a.String(), b.String())
	})
	return unique
}

func (q *FakeQuerier) getOrganizationMemberNoLock(orgID uuid.UUID) []database.OrganizationMember {
	var members []database.OrganizationMember
	for _, member := range q.organizationMembers {
		if member.OrganizationID == orgID {
			members = append(members, member)
		}
	}

	return members
}

// getEveryoneGroupMembersNoLock fetches all the users in an organization.
func (q *FakeQuerier) getEveryoneGroupMembersNoLock(orgID uuid.UUID) []database.User {
	var (
		everyone   []database.User
		orgMembers = q.getOrganizationMemberNoLock(orgID)
	)
	for _, member := range orgMembers {
		user, err := q.getUserByIDNoLock(member.UserID)
		if err != nil {
			return nil
		}
		everyone = append(everyone, user)
	}
	return everyone
}

// isEveryoneGroup returns true if the provided ID matches
// an organization ID.
func (q *FakeQuerier) isEveryoneGroup(id uuid.UUID) bool {
	for _, org := range q.organizations {
		if org.ID == id {
			return true
		}
	}
	return false
}

func (q *FakeQuerier) GetActiveDBCryptKeys(_ context.Context) ([]database.DBCryptKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()
	ks := make([]database.DBCryptKey, 0, len(q.dbcryptKeys))
	for _, k := range q.dbcryptKeys {
		if !k.ActiveKeyDigest.Valid {
			continue
		}
		ks = append([]database.DBCryptKey{}, k)
	}
	return ks, nil
}

func maxTime(t, u time.Time) time.Time {
	if t.After(u) {
		return t
	}
	return u
}

func minTime(t, u time.Time) time.Time {
	if t.Before(u) {
		return t
	}
	return u
}

func provisonerJobStatus(j database.ProvisionerJob) database.ProvisionerJobStatus {
	if isNotNull(j.CompletedAt) {
		if j.Error.String != "" {
			return database.ProvisionerJobStatusFailed
		}
		if isNotNull(j.CanceledAt) {
			return database.ProvisionerJobStatusCanceled
		}
		return database.ProvisionerJobStatusSucceeded
	}

	if isNotNull(j.CanceledAt) {
		return database.ProvisionerJobStatusCanceling
	}
	if isNull(j.StartedAt) {
		return database.ProvisionerJobStatusPending
	}
	return database.ProvisionerJobStatusRunning
}
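
// The derived status above, summarized (restating the branches, not adding to
// them):
//
//	completed_at set + error set         -> failed
//	completed_at set + canceled_at set   -> canceled
//	completed_at set                     -> succeeded
//	canceled_at set, not completed       -> canceling
//	started_at unset                     -> pending
//	otherwise                            -> running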

// isNull is only used in dbmem, so reflect is ok. Use this to make the logic
// look more similar to the postgres implementation.
func isNull(v interface{}) bool {
	return !isNotNull(v)
}

func isNotNull(v interface{}) bool {
	return reflect.ValueOf(v).FieldByName("Valid").Bool()
}

func (*FakeQuerier) AcquireLock(_ context.Context, _ int64) error {
	return xerrors.New("AcquireLock must only be called within a transaction")
}

func (q *FakeQuerier) AcquireProvisionerJob(_ context.Context, arg database.AcquireProvisionerJobParams) (database.ProvisionerJob, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.ProvisionerJob{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, provisionerJob := range q.provisionerJobs {
		if provisionerJob.StartedAt.Valid {
			continue
		}
		found := false
		for _, provisionerType := range arg.Types {
			if provisionerJob.Provisioner != provisionerType {
				continue
			}
			found = true
			break
		}
		if !found {
			continue
		}
		tags := map[string]string{}
		if arg.Tags != nil {
			err := json.Unmarshal(arg.Tags, &tags)
			if err != nil {
				return provisionerJob, xerrors.Errorf("unmarshal: %w", err)
			}
		}

		missing := false
		for key, value := range provisionerJob.Tags {
			provided, found := tags[key]
			if !found {
				missing = true
				break
			}
			if provided != value {
				missing = true
				break
			}
		}
		if missing {
			continue
		}
		provisionerJob.StartedAt = arg.StartedAt
		provisionerJob.UpdatedAt = arg.StartedAt.Time
		provisionerJob.WorkerID = arg.WorkerID
		provisionerJob.JobStatus = provisonerJobStatus(provisionerJob)
		q.provisionerJobs[index] = provisionerJob
		// clone the Tags before returning, since maps are reference types and
		// we don't want the caller to be able to mutate the map we have inside
		// dbmem!
		provisionerJob.Tags = maps.Clone(provisionerJob.Tags)
		return provisionerJob, nil
	}
	return database.ProvisionerJob{}, sql.ErrNoRows
}
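
// Tag matching above, illustrated (hypothetical values): a daemon offering
// tags {"scope": "organization", "env": "dev"} can acquire a job tagged
// {"scope": "organization"}, because every tag on the job must appear in the
// daemon's provided tags with an equal value; the daemon may offer extras.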

func (q *FakeQuerier) ActivityBumpWorkspace(ctx context.Context, arg database.ActivityBumpWorkspaceParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	workspace, err := q.getWorkspaceByIDNoLock(ctx, arg.WorkspaceID)
	if err != nil {
		return err
	}
	latestBuild, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, arg.WorkspaceID)
	if err != nil {
		return err
	}

	now := dbtime.Now()
	for i := range q.workspaceBuilds {
		if q.workspaceBuilds[i].BuildNumber != latestBuild.BuildNumber {
			continue
		}
		// If the build is not active, do not bump.
		if q.workspaceBuilds[i].Transition != database.WorkspaceTransitionStart {
			return nil
		}
		// If the provisioner job is not completed, do not bump.
		pj, err := q.getProvisionerJobByIDNoLock(ctx, q.workspaceBuilds[i].JobID)
		if err != nil {
			return err
		}
		if !pj.CompletedAt.Valid {
			return nil
		}
		// Do not bump if the deadline is not set.
		if q.workspaceBuilds[i].Deadline.IsZero() {
			return nil
		}

		// Check the template default TTL.
		template, err := q.getTemplateByIDNoLock(ctx, workspace.TemplateID)
		if err != nil {
			return err
		}
		if template.ActivityBump == 0 {
			return nil
		}
		activityBump := time.Duration(template.ActivityBump)

		var ttlDur time.Duration
		if now.Add(activityBump).After(arg.NextAutostart) && arg.NextAutostart.After(now) {
			// Extend to TTL (NOT activity bump)
			add := arg.NextAutostart.Sub(now)
			if workspace.Ttl.Valid && template.AllowUserAutostop {
				add += time.Duration(workspace.Ttl.Int64)
			} else {
				add += time.Duration(template.DefaultTTL)
			}
			ttlDur = add
		} else {
			// Otherwise, default to regular activity bump duration.
			ttlDur = activityBump
		}

		// Only bump if 5% of the deadline has passed.
		ttlDur95 := ttlDur - (ttlDur / 20)
		minBumpDeadline := q.workspaceBuilds[i].Deadline.Add(-ttlDur95)
		if now.Before(minBumpDeadline) {
			return nil
		}

		// Bump.
		newDeadline := now.Add(ttlDur)
		// Never decrease deadlines from a bump
		newDeadline = maxTime(newDeadline, q.workspaceBuilds[i].Deadline)
		q.workspaceBuilds[i].UpdatedAt = now
		if !q.workspaceBuilds[i].MaxDeadline.IsZero() {
			q.workspaceBuilds[i].Deadline = minTime(newDeadline, q.workspaceBuilds[i].MaxDeadline)
		} else {
			q.workspaceBuilds[i].Deadline = newDeadline
		}
		return nil
	}

	return sql.ErrNoRows
}
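
// Worked example for the 5% threshold above (illustrative numbers): with a
// one-hour bump (ttlDur = 60m), ttlDur95 = 57m, so minBumpDeadline sits 57
// minutes before the current deadline. If the deadline was just set to
// now+60m, another bump is skipped until at least 3 minutes (5% of the
// window) have elapsed.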

func (q *FakeQuerier) AllUserIDs(_ context.Context) ([]uuid.UUID, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()
	userIDs := make([]uuid.UUID, 0, len(q.users))
	for idx := range q.users {
		userIDs = append(userIDs, q.users[idx].ID)
	}
	return userIDs, nil
}

func (q *FakeQuerier) ArchiveUnusedTemplateVersions(_ context.Context, arg database.ArchiveUnusedTemplateVersionsParams) ([]uuid.UUID, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return nil, err
	}
	q.mutex.Lock()
	defer q.mutex.Unlock()
	type latestBuild struct {
		Number  int32
		Version uuid.UUID
	}
	latest := make(map[uuid.UUID]latestBuild)

	for _, b := range q.workspaceBuilds {
		v, ok := latest[b.WorkspaceID]
		if ok || b.BuildNumber < v.Number {
			// Not the latest
			continue
		}
		// Ignore deleted workspaces.
		if b.Transition == database.WorkspaceTransitionDelete {
			continue
		}
		latest[b.WorkspaceID] = latestBuild{
			Number:  b.BuildNumber,
			Version: b.TemplateVersionID,
		}
	}

	usedVersions := make(map[uuid.UUID]bool)
	for _, l := range latest {
		usedVersions[l.Version] = true
	}
	for _, tpl := range q.templates {
		usedVersions[tpl.ActiveVersionID] = true
	}

	var archived []uuid.UUID
	for i, v := range q.templateVersions {
		if arg.TemplateVersionID != uuid.Nil {
			if v.ID != arg.TemplateVersionID {
				continue
			}
		}
		if v.Archived {
			continue
		}

		if _, ok := usedVersions[v.ID]; !ok {
			var job *database.ProvisionerJob
			for i, j := range q.provisionerJobs {
				if v.JobID == j.ID {
					job = &q.provisionerJobs[i]
					break
				}
			}

			if arg.JobStatus.Valid {
				if job.JobStatus != arg.JobStatus.ProvisionerJobStatus {
					continue
				}
			}

			if job.JobStatus == database.ProvisionerJobStatusRunning || job.JobStatus == database.ProvisionerJobStatusPending {
				continue
			}

			v.Archived = true
			q.templateVersions[i] = v
			archived = append(archived, v.ID)
		}
	}

	return archived, nil
}

func (q *FakeQuerier) BatchUpdateWorkspaceLastUsedAt(_ context.Context, arg database.BatchUpdateWorkspaceLastUsedAtParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	// temporary map to avoid O(q.workspaces*arg.workspaceIds)
	m := make(map[uuid.UUID]struct{})
	for _, id := range arg.IDs {
		m[id] = struct{}{}
	}
	n := 0
	for i := 0; i < len(q.workspaces); i++ {
		if _, found := m[q.workspaces[i].ID]; !found {
			continue
		}
		q.workspaces[i].LastUsedAt = arg.LastUsedAt
		n++
	}
	return nil
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (*FakeQuerier) CleanTailnetCoordinators(_ context.Context) error {
	return ErrUnimplemented
}

func (*FakeQuerier) CleanTailnetLostPeers(context.Context) error {
	return ErrUnimplemented
}

func (*FakeQuerier) CleanTailnetTunnels(context.Context) error {
	return ErrUnimplemented
}

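// DeleteAPIKeyByID removes the API key with the given ID. The slice is
// compacted with a swap-with-last delete, so the ordering of q.apiKeys is not
// preserved.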
func (q *FakeQuerier) DeleteAPIKeyByID(_ context.Context, id string) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, apiKey := range q.apiKeys {
		if apiKey.ID != id {
			continue
		}
		q.apiKeys[index] = q.apiKeys[len(q.apiKeys)-1]
		q.apiKeys = q.apiKeys[:len(q.apiKeys)-1]
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) DeleteAPIKeysByUserID(_ context.Context, userID uuid.UUID) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i := len(q.apiKeys) - 1; i >= 0; i-- {
		if q.apiKeys[i].UserID == userID {
			q.apiKeys = append(q.apiKeys[:i], q.apiKeys[i+1:]...)
		}
	}

	return nil
}

func (*FakeQuerier) DeleteAllTailnetClientSubscriptions(_ context.Context, arg database.DeleteAllTailnetClientSubscriptionsParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	return ErrUnimplemented
}

func (*FakeQuerier) DeleteAllTailnetTunnels(_ context.Context, arg database.DeleteAllTailnetTunnelsParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	return ErrUnimplemented
}

func (q *FakeQuerier) DeleteApplicationConnectAPIKeysByUserID(_ context.Context, userID uuid.UUID) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i := len(q.apiKeys) - 1; i >= 0; i-- {
		if q.apiKeys[i].UserID == userID && q.apiKeys[i].Scope == database.APIKeyScopeApplicationConnect {
			q.apiKeys = append(q.apiKeys[:i], q.apiKeys[i+1:]...)
		}
	}

	return nil
}

func (*FakeQuerier) DeleteCoordinator(context.Context, uuid.UUID) error {
	return ErrUnimplemented
}

func (q *FakeQuerier) DeleteExternalAuthLink(_ context.Context, arg database.DeleteExternalAuthLinkParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, key := range q.externalAuthLinks {
		if key.UserID != arg.UserID {
			continue
		}
		if key.ProviderID != arg.ProviderID {
			continue
		}
		q.externalAuthLinks[index] = q.externalAuthLinks[len(q.externalAuthLinks)-1]
		q.externalAuthLinks = q.externalAuthLinks[:len(q.externalAuthLinks)-1]
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) DeleteGitSSHKey(_ context.Context, userID uuid.UUID) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, key := range q.gitSSHKey {
		if key.UserID != userID {
			continue
		}
		q.gitSSHKey[index] = q.gitSSHKey[len(q.gitSSHKey)-1]
		q.gitSSHKey = q.gitSSHKey[:len(q.gitSSHKey)-1]
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) DeleteGroupByID(_ context.Context, id uuid.UUID) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, group := range q.groups {
		if group.ID == id {
			q.groups = append(q.groups[:i], q.groups[i+1:]...)
			return nil
		}
	}

	return sql.ErrNoRows
}

func (q *FakeQuerier) DeleteGroupMemberFromGroup(_ context.Context, arg database.DeleteGroupMemberFromGroupParams) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, member := range q.groupMembers {
		if member.UserID == arg.UserID && member.GroupID == arg.GroupID {
			q.groupMembers = append(q.groupMembers[:i], q.groupMembers[i+1:]...)
		}
	}
	return nil
}

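// DeleteGroupMembersByOrgAndUser removes a user's group memberships, but only
// for groups that belong to the organization given in the arguments;
// memberships in groups from other organizations are kept.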
func (q *FakeQuerier) DeleteGroupMembersByOrgAndUser(_ context.Context, arg database.DeleteGroupMembersByOrgAndUserParams) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	newMembers := q.groupMembers[:0]
	for _, member := range q.groupMembers {
		if member.UserID != arg.UserID {
			// Do not delete the other members.
			newMembers = append(newMembers, member)
		} else if member.UserID == arg.UserID {
			// We only want to delete from groups in the organization in the args.
			for _, group := range q.groups {
				// Find the group that the member is a part of.
				if group.ID == member.GroupID {
					// Only keep the membership if the group's organization ID
					// does not match the arg organization ID, since the arg
					// says which org to delete from.
					if group.OrganizationID != arg.OrganizationID {
						newMembers = append(newMembers, member)
					}
					break
				}
			}
		}
	}
	q.groupMembers = newMembers

	return nil
}

func (q *FakeQuerier) DeleteLicense(_ context.Context, id int32) (int32, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, l := range q.licenses {
		if l.ID == id {
			q.licenses[index] = q.licenses[len(q.licenses)-1]
			q.licenses = q.licenses[:len(q.licenses)-1]
			return id, nil
		}
	}
	return 0, sql.ErrNoRows
}

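// DeleteOAuth2ProviderAppByID deletes the OAuth2 provider app with the given
// ID and, in the same pass, drops every app secret that references it.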
func (q *FakeQuerier) DeleteOAuth2ProviderAppByID(_ context.Context, id uuid.UUID) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, app := range q.oauth2ProviderApps {
		if app.ID == id {
			q.oauth2ProviderApps[index] = q.oauth2ProviderApps[len(q.oauth2ProviderApps)-1]
			q.oauth2ProviderApps = q.oauth2ProviderApps[:len(q.oauth2ProviderApps)-1]

			secrets := []database.OAuth2ProviderAppSecret{}
			for _, secret := range q.oauth2ProviderAppSecrets {
				if secret.AppID != id {
					secrets = append(secrets, secret)
				}
			}
			q.oauth2ProviderAppSecrets = secrets

			return nil
		}
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) DeleteOAuth2ProviderAppSecretByID(_ context.Context, id uuid.UUID) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, secret := range q.oauth2ProviderAppSecrets {
		if secret.ID == id {
			q.oauth2ProviderAppSecrets[index] = q.oauth2ProviderAppSecrets[len(q.oauth2ProviderAppSecrets)-1]
			q.oauth2ProviderAppSecrets = q.oauth2ProviderAppSecrets[:len(q.oauth2ProviderAppSecrets)-1]
			return nil
		}
	}
	return sql.ErrNoRows
}

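// DeleteOldProvisionerDaemons removes provisioner daemons that have not been
// seen for more than a week (or, if never seen, were created more than a week
// ago).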
func (q *FakeQuerier) DeleteOldProvisionerDaemons(_ context.Context) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	now := dbtime.Now()
	weekInterval := 7 * 24 * time.Hour
	weekAgo := now.Add(-weekInterval)

	var validDaemons []database.ProvisionerDaemon
	for _, p := range q.provisionerDaemons {
		if (p.CreatedAt.Before(weekAgo) && !p.LastSeenAt.Valid) || (p.LastSeenAt.Valid && p.LastSeenAt.Time.Before(weekAgo)) {
			continue
		}
		validDaemons = append(validDaemons, p)
	}
	q.provisionerDaemons = validDaemons
	return nil
}

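// DeleteOldWorkspaceAgentLogs drops logs belonging to agents that last
// connected more than a week ago; logs for agents without a matching entry in
// q.workspaceAgents are kept.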
func (q *FakeQuerier) DeleteOldWorkspaceAgentLogs(_ context.Context) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	now := dbtime.Now()
	weekInterval := 7 * 24 * time.Hour
	weekAgo := now.Add(-weekInterval)

	var validLogs []database.WorkspaceAgentLog
	for _, log := range q.workspaceAgentLogs {
		var toBeDeleted bool
		for _, agent := range q.workspaceAgents {
			if agent.ID == log.AgentID && agent.LastConnectedAt.Valid && agent.LastConnectedAt.Time.Before(weekAgo) {
				toBeDeleted = true
				break
			}
		}

		if !toBeDeleted {
			validLogs = append(validLogs, log)
		}
	}
	q.workspaceAgentLogs = validLogs
	return nil
}

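// DeleteOldWorkspaceAgentStats discards agent stats rows created more than
// roughly six months (6 * 30 days) ago.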
func (q *FakeQuerier) DeleteOldWorkspaceAgentStats(_ context.Context) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	now := dbtime.Now()
	sixMonthInterval := 6 * 30 * 24 * time.Hour
	sixMonthsAgo := now.Add(-sixMonthInterval)

	var validStats []database.WorkspaceAgentStat
	for _, stat := range q.workspaceAgentStats {
		if stat.CreatedAt.Before(sixMonthsAgo) {
			continue
		}
		validStats = append(validStats, stat)
	}
	q.workspaceAgentStats = validStats
	return nil
}

func (q *FakeQuerier) DeleteReplicasUpdatedBefore(_ context.Context, before time.Time) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	// Iterate backwards so removing an element does not skip its neighbor.
	for i := len(q.replicas) - 1; i >= 0; i-- {
		if q.replicas[i].UpdatedAt.Before(before) {
			q.replicas = append(q.replicas[:i], q.replicas[i+1:]...)
		}
	}

	return nil
}

func (*FakeQuerier) DeleteTailnetAgent(context.Context, database.DeleteTailnetAgentParams) (database.DeleteTailnetAgentRow, error) {
	return database.DeleteTailnetAgentRow{}, ErrUnimplemented
}

func (*FakeQuerier) DeleteTailnetClient(context.Context, database.DeleteTailnetClientParams) (database.DeleteTailnetClientRow, error) {
	return database.DeleteTailnetClientRow{}, ErrUnimplemented
}

func (*FakeQuerier) DeleteTailnetClientSubscription(context.Context, database.DeleteTailnetClientSubscriptionParams) error {
	return ErrUnimplemented
}

func (*FakeQuerier) DeleteTailnetPeer(_ context.Context, arg database.DeleteTailnetPeerParams) (database.DeleteTailnetPeerRow, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.DeleteTailnetPeerRow{}, err
	}

	return database.DeleteTailnetPeerRow{}, ErrUnimplemented
}

func (*FakeQuerier) DeleteTailnetTunnel(_ context.Context, arg database.DeleteTailnetTunnelParams) (database.DeleteTailnetTunnelRow, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.DeleteTailnetTunnelRow{}, err
	}

	return database.DeleteTailnetTunnelRow{}, ErrUnimplemented
}

func (q *FakeQuerier) DeleteWorkspaceAgentPortShare(_ context.Context, arg database.DeleteWorkspaceAgentPortShareParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, share := range q.workspaceAgentPortShares {
		if share.WorkspaceID == arg.WorkspaceID && share.AgentName == arg.AgentName && share.Port == arg.Port {
			q.workspaceAgentPortShares = append(q.workspaceAgentPortShares[:i], q.workspaceAgentPortShares[i+1:]...)
			return nil
		}
	}

	return nil
}

func (q *FakeQuerier) FavoriteWorkspace(_ context.Context, arg uuid.UUID) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i := 0; i < len(q.workspaces); i++ {
		if q.workspaces[i].ID != arg {
			continue
		}
		q.workspaces[i].Favorite = true
		return nil
	}
	return nil
}

func (q *FakeQuerier) GetAPIKeyByID(_ context.Context, id string) (database.APIKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, apiKey := range q.apiKeys {
		if apiKey.ID == id {
			return apiKey, nil
		}
	}
	return database.APIKey{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetAPIKeyByName(_ context.Context, params database.GetAPIKeyByNameParams) (database.APIKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if params.TokenName == "" {
		return database.APIKey{}, sql.ErrNoRows
	}
	for _, apiKey := range q.apiKeys {
		if params.UserID == apiKey.UserID && params.TokenName == apiKey.TokenName {
			return apiKey, nil
		}
	}
	return database.APIKey{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetAPIKeysByLoginType(_ context.Context, t database.LoginType) ([]database.APIKey, error) {
	if err := validateDatabaseType(t); err != nil {
		return nil, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	apiKeys := make([]database.APIKey, 0)
	for _, key := range q.apiKeys {
		if key.LoginType == t {
			apiKeys = append(apiKeys, key)
		}
	}
	return apiKeys, nil
}

func (q *FakeQuerier) GetAPIKeysByUserID(_ context.Context, params database.GetAPIKeysByUserIDParams) ([]database.APIKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	apiKeys := make([]database.APIKey, 0)
	for _, key := range q.apiKeys {
		if key.UserID == params.UserID && key.LoginType == params.LoginType {
			apiKeys = append(apiKeys, key)
		}
	}
	return apiKeys, nil
}

func (q *FakeQuerier) GetAPIKeysLastUsedAfter(_ context.Context, after time.Time) ([]database.APIKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	apiKeys := make([]database.APIKey, 0)
	for _, key := range q.apiKeys {
		if key.LastUsed.After(after) {
			apiKeys = append(apiKeys, key)
		}
	}
	return apiKeys, nil
}

func (q *FakeQuerier) GetActiveUserCount(_ context.Context) (int64, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	active := int64(0)
	for _, u := range q.users {
		if u.Status == database.UserStatusActive && !u.Deleted {
			active++
		}
	}
	return active, nil
}

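// GetActiveWorkspaceBuildsByTemplateID returns the latest build for every
// workspace on the given template, filtered down to builds whose transition
// is "start".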
func (q *FakeQuerier) GetActiveWorkspaceBuildsByTemplateID(ctx context.Context, templateID uuid.UUID) ([]database.WorkspaceBuild, error) {
	workspaceIDs := func() []uuid.UUID {
		q.mutex.RLock()
		defer q.mutex.RUnlock()

		ids := []uuid.UUID{}
		for _, workspace := range q.workspaces {
			if workspace.TemplateID == templateID {
				ids = append(ids, workspace.ID)
			}
		}
		return ids
	}()

	builds, err := q.GetLatestWorkspaceBuildsByWorkspaceIDs(ctx, workspaceIDs)
	if err != nil {
		return nil, err
	}

	filteredBuilds := []database.WorkspaceBuild{}
	for _, build := range builds {
		if build.Transition == database.WorkspaceTransitionStart {
			filteredBuilds = append(filteredBuilds, build)
		}
	}
	return filteredBuilds, nil
}

func (*FakeQuerier) GetAllTailnetAgents(_ context.Context) ([]database.TailnetAgent, error) {
	return nil, ErrUnimplemented
}

func (*FakeQuerier) GetAllTailnetCoordinators(context.Context) ([]database.TailnetCoordinator, error) {
	return nil, ErrUnimplemented
}

func (*FakeQuerier) GetAllTailnetPeers(context.Context) ([]database.TailnetPeer, error) {
	return nil, ErrUnimplemented
}

func (*FakeQuerier) GetAllTailnetTunnels(context.Context) ([]database.TailnetTunnel, error) {
	return nil, ErrUnimplemented
}

func (q *FakeQuerier) GetAppSecurityKey(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.appSecurityKey, nil
}

func (q *FakeQuerier) GetApplicationName(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if q.applicationName == "" {
		return "", sql.ErrNoRows
	}

	return q.applicationName, nil
}

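// GetAuditLogsOffset applies the query's offset, limit, and filters (action,
// resource type/ID, username, email, date range, and build reason) to the
// in-memory audit log, which is already ordered by time descending, and joins
// in the acting user's details when that user still exists.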
func (q *FakeQuerier) GetAuditLogsOffset(_ context.Context, arg database.GetAuditLogsOffsetParams) ([]database.GetAuditLogsOffsetRow, error) {
	if err := validateDatabaseType(arg); err != nil {
		return nil, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	logs := make([]database.GetAuditLogsOffsetRow, 0, arg.Limit)

	// q.auditLogs are already sorted by time DESC, so no need to sort after the fact.
	for _, alog := range q.auditLogs {
		if arg.Offset > 0 {
			arg.Offset--
			continue
		}
		if arg.Action != "" && !strings.Contains(string(alog.Action), arg.Action) {
			continue
		}
		if arg.ResourceType != "" && !strings.Contains(string(alog.ResourceType), arg.ResourceType) {
			continue
		}
		if arg.ResourceID != uuid.Nil && alog.ResourceID != arg.ResourceID {
			continue
		}
		if arg.Username != "" {
			user, err := q.getUserByIDNoLock(alog.UserID)
			if err == nil && !strings.EqualFold(arg.Username, user.Username) {
				continue
			}
		}
		if arg.Email != "" {
			user, err := q.getUserByIDNoLock(alog.UserID)
			if err == nil && !strings.EqualFold(arg.Email, user.Email) {
				continue
			}
		}
		if !arg.DateFrom.IsZero() {
			if alog.Time.Before(arg.DateFrom) {
				continue
			}
		}
		if !arg.DateTo.IsZero() {
			if alog.Time.After(arg.DateTo) {
				continue
			}
		}
		if arg.BuildReason != "" {
			workspaceBuild, err := q.getWorkspaceBuildByIDNoLock(context.Background(), alog.ResourceID)
			if err == nil && !strings.EqualFold(arg.BuildReason, string(workspaceBuild.Reason)) {
				continue
			}
		}

		user, err := q.getUserByIDNoLock(alog.UserID)
		userValid := err == nil

		logs = append(logs, database.GetAuditLogsOffsetRow{
			ID:               alog.ID,
			RequestID:        alog.RequestID,
			OrganizationID:   alog.OrganizationID,
			Ip:               alog.Ip,
			UserAgent:        alog.UserAgent,
			ResourceType:     alog.ResourceType,
			ResourceID:       alog.ResourceID,
			ResourceTarget:   alog.ResourceTarget,
			ResourceIcon:     alog.ResourceIcon,
			Action:           alog.Action,
			Diff:             alog.Diff,
			StatusCode:       alog.StatusCode,
			AdditionalFields: alog.AdditionalFields,
			UserID:           alog.UserID,
			UserUsername:     sql.NullString{String: user.Username, Valid: userValid},
			UserEmail:        sql.NullString{String: user.Email, Valid: userValid},
			UserCreatedAt:    sql.NullTime{Time: user.CreatedAt, Valid: userValid},
			UserStatus:       database.NullUserStatus{UserStatus: user.Status, Valid: userValid},
			UserRoles:        user.RBACRoles,
			Count:            0,
		})

		if len(logs) >= int(arg.Limit) {
			break
		}
	}

	count := int64(len(logs))
	for i := range logs {
		logs[i].Count = count
	}

	return logs, nil
}

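// GetAuthorizationUserRoles collects everything the authorizer needs for a
// user: their site-wide roles plus the implicit "member" role, their
// organization roles plus an implicit organization-member role per org, and
// the IDs of the groups they belong to.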
func (q *FakeQuerier) GetAuthorizationUserRoles(_ context.Context, userID uuid.UUID) (database.GetAuthorizationUserRolesRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	var user *database.User
	roles := make([]string, 0)
	for _, u := range q.users {
		if u.ID == userID {
			u := u
			roles = append(roles, u.RBACRoles...)
			roles = append(roles, "member")
			user = &u
			break
		}
	}

	for _, mem := range q.organizationMembers {
		if mem.UserID == userID {
			roles = append(roles, mem.Roles...)
			roles = append(roles, "organization-member:"+mem.OrganizationID.String())
		}
	}

	var groups []string
	for _, member := range q.groupMembers {
		if member.UserID == userID {
			groups = append(groups, member.GroupID.String())
		}
	}

	if user == nil {
		return database.GetAuthorizationUserRolesRow{}, sql.ErrNoRows
	}

	return database.GetAuthorizationUserRolesRow{
		ID:       userID,
		Username: user.Username,
		Status:   user.Status,
		Roles:    roles,
		Groups:   groups,
	}, nil
}

func (q *FakeQuerier) GetDBCryptKeys(_ context.Context) ([]database.DBCryptKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()
	ks := make([]database.DBCryptKey, 0)
	ks = append(ks, q.dbcryptKeys...)
	return ks, nil
}

func (q *FakeQuerier) GetDERPMeshKey(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.derpMeshKey, nil
}

func (q *FakeQuerier) GetDefaultProxyConfig(_ context.Context) (database.GetDefaultProxyConfigRow, error) {
	return database.GetDefaultProxyConfigRow{
		DisplayName: q.defaultProxyDisplayName,
		IconUrl:     q.defaultProxyIconURL,
	}, nil
}

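// GetDeploymentDAUs buckets agent stats with at least one connection into
// per-day sets of user IDs (shifting each timestamp by tzOffset hours) and
// returns one row per (date, user) pair, ordered by date.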
func (q *FakeQuerier) GetDeploymentDAUs(_ context.Context, tzOffset int32) ([]database.GetDeploymentDAUsRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	seens := make(map[time.Time]map[uuid.UUID]struct{})

	for _, as := range q.workspaceAgentStats {
		if as.ConnectionCount == 0 {
			continue
		}
		date := as.CreatedAt.UTC().Add(time.Duration(tzOffset) * -1 * time.Hour).Truncate(time.Hour * 24)

		dateEntry := seens[date]
		if dateEntry == nil {
			dateEntry = make(map[uuid.UUID]struct{})
		}
		dateEntry[as.UserID] = struct{}{}
		seens[date] = dateEntry
	}

	seenKeys := maps.Keys(seens)
	sort.Slice(seenKeys, func(i, j int) bool {
		return seenKeys[i].Before(seenKeys[j])
	})

	var rs []database.GetDeploymentDAUsRow
	for _, key := range seenKeys {
		ids := seens[key]
		for id := range ids {
			rs = append(rs, database.GetDeploymentDAUsRow{
				Date:   key,
				UserID: id,
			})
		}
	}

	return rs, nil
}

func (q *FakeQuerier) GetDeploymentID(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

feat: Add provisionerdaemon to coderd (#141)
* feat: Add history middleware parameters
These will be used for streaming logs, checking status,
and other operations related to workspace and project
history.
* refactor: Move all HTTP routes to top-level struct
Nesting all structs behind their respective structures
is leaky, and promotes naming conflicts between handlers.
Our HTTP routes cannot have conflicts, so neither should
function naming.
* Add provisioner daemon routes
* Add periodic updates
* Skip pubsub if short
* Return jobs with WorkspaceHistory
* Add endpoints for extracting singular history
* The full end-to-end operation works
* fix: Disable compression for websocket dRPC transport (#145)
There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and dRPC feel like they own the `byte[]` being sent between them. This can lead to data races, in which both `dRPC` and the websocket are writing.
This is just tracking some experimentation to fix that race condition
## Run results: ##
- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45
```
status code 412: The provided project history is running. Wait for it to complete importing!`
```
- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176
```
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
- Run 5: peer failure
- Run 6: Pass ✅
- Run 7: Peer failure
## Open Questions: ##
### Is `dRPC` or `websocket` at fault for the data race?
It looks like this condition is specifically happening when `dRPC` decides to [`SendError`]). This constructs a new byte payload from [`MarshalError`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/error.go#L15) - so `dRPC` has created this buffer and owns it.
From `dRPC`'s perspective, the callstack looks like this:
- [`sendPacket`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcstream/stream.go#L253)
- [`writeFrame`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/writer.go#L65)
- [`AppendFrame`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/packet.go#L128)
- with finally the data race happening here:
```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
...
out := buf
out = append(out, control). // <---------
```
This should be fine, since `dPRC` create this buffer, and is taking the byte buffer constructed from `MarshalError` and tacking a bunch of headers on it to create a proper frame.
Once `dRPC` is done writing, it _hangs onto the buffer and resets it here__: https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/writer.go#L73
However... the websocket implementation, once it gets the buffer, it runs a `statelessDeflate` [here](https://github.com/nhooyr/websocket/blob/8dee580a7f74cf1713400307b4eee514b927870f/write.go#L180), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](https://github.com/klauspost/compress/blob/a1a9cfc821f00faf2f5231beaa96244344d50391/flate/stateless.go#L94), which is where get our race.
In the case where the `byte[]` aren't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, they both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and `websocket` is compressing on the fly).
### Why does cloning on `Read` fail?
Get a bunch of errors like:
```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```
# UPDATE:
We decided we could disable websocket compression, which would avoid the race because the in-place `deflate` operaton would no longer be run. Trying that out now:
- Run 1: ✅
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3: ✅
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: ✅
* fix: Remove race condition with acquiredJobDone channel (#148)
Found another data race while running the tests: https://github.com/coder/coder/runs/5044320845?check_suite_focus=true#step:7:83
__Issue:__ There is a race in the p.acquiredJobDone chan - in particular, there can be a case where we're waiting on the channel to finish (in close) with <-p.acquiredJobDone, but in parallel, an acquireJob could've been started, which would create a new channel for p.acquiredJobDone. There is a similar race in `close(..)`ing the channel, which also came up in test runs.
__Fix:__ Instead of recreating the channel everytime, we can use `sync.WaitGroup` to accomplish the same functionality - a semaphore to make close wait for the current job to wrap up.
* fix: Bump up workspace history timeout (#149)
This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32
Looking at the timing of the test:
```
t.go:56: 2022-02-02 21:33:21.964 [DEBUG] (terraform-provisioner) <provision.go:139> ran apply
t.go:56: 2022-02-02 21:33:21.991 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.050 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.090 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.140 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.195 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.240 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
It appears that the `terraform apply` job had just finished - with less than a second to spare until our `require.Eventually` gave up - but there was still work to be done (i.e., collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout.
Note that the setup for this test has a similar project-history wait that allows 15s, so I borrowed that value here.
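For illustration, a hedged sketch of the bumped wait (the helper name and condition are placeholders, not the real test code):
```go
package coderd_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// waitForWorkspaceHistory polls the given condition with the longer 15s budget
// borrowed from the project-history wait, instead of the original 5s.
func waitForWorkspaceHistory(t *testing.T, done func() bool) {
	t.Helper()
	require.Eventually(t, done, 15*time.Second, 25*time.Millisecond)
}
```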
In the future - we can look at potentially using a simple echo provider to exercise this in the unit test, in a way that is more reliable in terms of timing. I'll log an issue to track that.
Co-authored-by: Bryan <bryan@coder.com>
	return q.deploymentID, nil
}

func (q *FakeQuerier) GetDeploymentWorkspaceAgentStats(_ context.Context, createdAfter time.Time) (database.GetDeploymentWorkspaceAgentStatsRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()
	agentStatsCreatedAfter := make([]database.WorkspaceAgentStat, 0)
	for _, agentStat := range q.workspaceAgentStats {
		if agentStat.CreatedAt.After(createdAfter) {
			agentStatsCreatedAfter = append(agentStatsCreatedAfter, agentStat)
		}
	}

	latestAgentStats := map[uuid.UUID]database.WorkspaceAgentStat{}
	for _, agentStat := range q.workspaceAgentStats {
		if agentStat.CreatedAt.After(createdAfter) {
			latestAgentStats[agentStat.AgentID] = agentStat
		}
	}

	stat := database.GetDeploymentWorkspaceAgentStatsRow{}
	for _, agentStat := range latestAgentStats {
		stat.SessionCountVSCode += agentStat.SessionCountVSCode
		stat.SessionCountJetBrains += agentStat.SessionCountJetBrains
		stat.SessionCountReconnectingPTY += agentStat.SessionCountReconnectingPTY
		stat.SessionCountSSH += agentStat.SessionCountSSH
	}

	latencies := make([]float64, 0)
	for _, agentStat := range agentStatsCreatedAfter {
		if agentStat.ConnectionMedianLatencyMS <= 0 {
			continue
		}
		stat.WorkspaceRxBytes += agentStat.RxBytes
		stat.WorkspaceTxBytes += agentStat.TxBytes
		latencies = append(latencies, agentStat.ConnectionMedianLatencyMS)
	}

	tryPercentile := func(fs []float64, p float64) float64 {
		if len(fs) == 0 {
			return -1
		}
		sort.Float64s(fs)
		return fs[int(float64(len(fs))*p/100)]
	}

	stat.WorkspaceConnectionLatency50 = tryPercentile(latencies, 50)
	stat.WorkspaceConnectionLatency95 = tryPercentile(latencies, 95)

	return stat, nil
}

func (q *FakeQuerier) GetDeploymentWorkspaceStats(ctx context.Context) (database.GetDeploymentWorkspaceStatsRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	stat := database.GetDeploymentWorkspaceStatsRow{}
	for _, workspace := range q.workspaces {
		build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspace.ID)
		if err != nil {
			return stat, err
		}
		job, err := q.getProvisionerJobByIDNoLock(ctx, build.JobID)
		if err != nil {
			return stat, err
		}
		if !job.StartedAt.Valid {
			stat.PendingWorkspaces++
			continue
		}
		if job.StartedAt.Valid &&
			!job.CanceledAt.Valid &&
			time.Since(job.UpdatedAt) <= 30*time.Second &&
			!job.CompletedAt.Valid {
			stat.BuildingWorkspaces++
			continue
		}
		if job.CompletedAt.Valid &&
			!job.CanceledAt.Valid &&
			!job.Error.Valid {
			if build.Transition == database.WorkspaceTransitionStart {
				stat.RunningWorkspaces++
			}
			if build.Transition == database.WorkspaceTransitionStop {
				stat.StoppedWorkspaces++
			}
			continue
		}
		if job.CanceledAt.Valid || job.Error.Valid {
			stat.FailedWorkspaces++
			continue
		}
	}
	return stat, nil
}

func (q *FakeQuerier) GetExternalAuthLink(_ context.Context, arg database.GetExternalAuthLinkParams) (database.ExternalAuthLink, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.ExternalAuthLink{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()
	for _, gitAuthLink := range q.externalAuthLinks {
		if arg.UserID != gitAuthLink.UserID {
			continue
		}
		if arg.ProviderID != gitAuthLink.ProviderID {
			continue
		}
		return gitAuthLink, nil
	}
	return database.ExternalAuthLink{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetExternalAuthLinksByUserID(_ context.Context, userID uuid.UUID) ([]database.ExternalAuthLink, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()
	gals := make([]database.ExternalAuthLink, 0)
	for _, gal := range q.externalAuthLinks {
		if gal.UserID == userID {
			gals = append(gals, gal)
		}
	}
	return gals, nil
}

func (q *FakeQuerier) GetFileByHashAndCreator(_ context.Context, arg database.GetFileByHashAndCreatorParams) (database.File, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.File{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, file := range q.files {
		if file.Hash == arg.Hash && file.CreatedBy == arg.CreatedBy {
			return file, nil
		}
	}
	return database.File{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetFileByID(_ context.Context, id uuid.UUID) (database.File, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, file := range q.files {
		if file.ID == id {
			return file, nil
		}
	}
	return database.File{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetFileTemplates(_ context.Context, id uuid.UUID) ([]database.GetFileTemplatesRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	rows := make([]database.GetFileTemplatesRow, 0)
	var file database.File
	for _, f := range q.files {
		if f.ID == id {
			file = f
			break
		}
	}
	if file.Hash == "" {
		return rows, nil
	}

	for _, job := range q.provisionerJobs {
		if job.FileID == id {
			for _, version := range q.templateVersions {
				if version.JobID == job.ID {
					for _, template := range q.templates {
						if template.ID == version.TemplateID.UUID {
							rows = append(rows, database.GetFileTemplatesRow{
								FileID:                 file.ID,
								FileCreatedBy:          file.CreatedBy,
								TemplateID:             template.ID,
								TemplateOrganizationID: template.OrganizationID,
								TemplateCreatedBy:      template.CreatedBy,
								UserACL:                template.UserACL,
								GroupACL:               template.GroupACL,
							})
						}
					}
				}
			}
		}
	}

	return rows, nil
}

func (q *FakeQuerier) GetGitSSHKey(_ context.Context, userID uuid.UUID) (database.GitSSHKey, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, key := range q.gitSSHKey {
		if key.UserID == userID {
			return key, nil
		}
	}
	return database.GitSSHKey{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetGroupByID(ctx context.Context, id uuid.UUID) (database.Group, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getGroupByIDNoLock(ctx, id)
}

func (q *FakeQuerier) GetGroupByOrgAndName(_ context.Context, arg database.GetGroupByOrgAndNameParams) (database.Group, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Group{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, group := range q.groups {
		if group.OrganizationID == arg.OrganizationID &&
			group.Name == arg.Name {
			return group, nil
		}
	}

	return database.Group{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetGroupMembers(_ context.Context, id uuid.UUID) ([]database.User, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if q.isEveryoneGroup(id) {
		return q.getEveryoneGroupMembersNoLock(id), nil
	}

	var members []database.GroupMember
	for _, member := range q.groupMembers {
		if member.GroupID == id {
			members = append(members, member)
		}
	}

	users := make([]database.User, 0, len(members))
	for _, member := range members {
		for _, user := range q.users {
			if user.ID == member.UserID && !user.Deleted {
				users = append(users, user)
				break
			}
		}
	}

	return users, nil
}

func (q *FakeQuerier) GetGroupsByOrganizationID(_ context.Context, id uuid.UUID) ([]database.Group, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	groups := make([]database.Group, 0, len(q.groups))
	for _, group := range q.groups {
		if group.OrganizationID == id {
			groups = append(groups, group)
		}
	}

	return groups, nil
}

func (q *FakeQuerier) GetHealthSettings(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if q.healthSettings == nil {
		return "{}", nil
	}

	return string(q.healthSettings), nil
}

func (q *FakeQuerier) GetHungProvisionerJobs(_ context.Context, hungSince time.Time) ([]database.ProvisionerJob, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	hungJobs := []database.ProvisionerJob{}
	for _, provisionerJob := range q.provisionerJobs {
		if provisionerJob.StartedAt.Valid && !provisionerJob.CompletedAt.Valid && provisionerJob.UpdatedAt.Before(hungSince) {
			// clone the Tags before appending, since maps are reference types and
			// we don't want the caller to be able to mutate the map we have inside
			// dbmem!
			provisionerJob.Tags = maps.Clone(provisionerJob.Tags)
			hungJobs = append(hungJobs, provisionerJob)
		}
	}
	return hungJobs, nil
}
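// A hung job is one that has started, has not completed, and has not been
// updated since the hungSince cutoff. Illustrative call, assuming q is a
// populated FakeQuerier (the five-minute window is an arbitrary example):
//
//	cutoff := time.Now().Add(-5 * time.Minute)
//	hung, err := q.GetHungProvisionerJobs(ctx, cutoff)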

func (q *FakeQuerier) GetJFrogXrayScanByWorkspaceAndAgentID(_ context.Context, arg database.GetJFrogXrayScanByWorkspaceAndAgentIDParams) (database.JfrogXrayScan, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.JfrogXrayScan{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, scan := range q.jfrogXRayScans {
		if scan.AgentID == arg.AgentID && scan.WorkspaceID == arg.WorkspaceID {
			return scan, nil
		}
	}

	return database.JfrogXrayScan{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetLastUpdateCheck(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if q.lastUpdateCheck == nil {
		return "", sql.ErrNoRows
	}
	return string(q.lastUpdateCheck), nil
}

func (q *FakeQuerier) GetLatestWorkspaceBuildByWorkspaceID(ctx context.Context, workspaceID uuid.UUID) (database.WorkspaceBuild, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspaceID)
}

func (q *FakeQuerier) GetLatestWorkspaceBuilds(_ context.Context) ([]database.WorkspaceBuild, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	builds := make(map[uuid.UUID]database.WorkspaceBuild)
	buildNumbers := make(map[uuid.UUID]int32)
	for _, workspaceBuild := range q.workspaceBuilds {
		id := workspaceBuild.WorkspaceID
		if workspaceBuild.BuildNumber > buildNumbers[id] {
			builds[id] = q.workspaceBuildWithUserNoLock(workspaceBuild)
			buildNumbers[id] = workspaceBuild.BuildNumber
		}
	}
	var returnBuilds []database.WorkspaceBuild
	for i, n := range buildNumbers {
		if n > 0 {
			b := builds[i]
			returnBuilds = append(returnBuilds, b)
		}
	}
	if len(returnBuilds) == 0 {
		return nil, sql.ErrNoRows
	}
	return returnBuilds, nil
}
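// The loop above keeps only the build with the highest BuildNumber per
// workspace; the buildNumbers map doubles as a "seen" marker. Illustrative
// use, assuming q holds at least one workspace build:
//
//	latest, err := q.GetLatestWorkspaceBuilds(ctx)
//	// latest contains at most one build per workspace, in unspecified order.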

func (q *FakeQuerier) GetLatestWorkspaceBuildsByWorkspaceIDs(_ context.Context, ids []uuid.UUID) ([]database.WorkspaceBuild, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	builds := make(map[uuid.UUID]database.WorkspaceBuild)
	buildNumbers := make(map[uuid.UUID]int32)
	for _, workspaceBuild := range q.workspaceBuilds {
		for _, id := range ids {
			if id == workspaceBuild.WorkspaceID && workspaceBuild.BuildNumber > buildNumbers[id] {
				builds[id] = q.workspaceBuildWithUserNoLock(workspaceBuild)
				buildNumbers[id] = workspaceBuild.BuildNumber
			}
		}
	}
	var returnBuilds []database.WorkspaceBuild
	for i, n := range buildNumbers {
		if n > 0 {
			b := builds[i]
			returnBuilds = append(returnBuilds, b)
		}
	}
	if len(returnBuilds) == 0 {
		return nil, sql.ErrNoRows
	}
	return returnBuilds, nil
}

func (q *FakeQuerier) GetLicenseByID(_ context.Context, id int32) (database.License, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, license := range q.licenses {
		if license.ID == id {
			return license, nil
		}
	}
	return database.License{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetLicenses(_ context.Context) ([]database.License, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	results := append([]database.License{}, q.licenses...)
	sort.Slice(results, func(i, j int) bool { return results[i].ID < results[j].ID })
	return results, nil
}

func (q *FakeQuerier) GetLogoURL(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if q.logoURL == "" {
		return "", sql.ErrNoRows
	}

	return q.logoURL, nil
}

func (q *FakeQuerier) GetOAuth2ProviderAppByID(_ context.Context, id uuid.UUID) (database.OAuth2ProviderApp, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, app := range q.oauth2ProviderApps {
		if app.ID == id {
			return app, nil
		}
	}
	return database.OAuth2ProviderApp{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetOAuth2ProviderAppSecretByID(_ context.Context, id uuid.UUID) (database.OAuth2ProviderAppSecret, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, secret := range q.oauth2ProviderAppSecrets {
		if secret.ID == id {
			return secret, nil
		}
	}
	return database.OAuth2ProviderAppSecret{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetOAuth2ProviderAppSecretsByAppID(_ context.Context, appID uuid.UUID) ([]database.OAuth2ProviderAppSecret, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, app := range q.oauth2ProviderApps {
		if app.ID == appID {
			secrets := []database.OAuth2ProviderAppSecret{}
			for _, secret := range q.oauth2ProviderAppSecrets {
				if secret.AppID == appID {
					secrets = append(secrets, secret)
				}
			}

			slices.SortFunc(secrets, func(a, b database.OAuth2ProviderAppSecret) int {
				if a.CreatedAt.Before(b.CreatedAt) {
					return -1
				} else if a.CreatedAt.Equal(b.CreatedAt) {
					return 0
				}
				return 1
			})
			return secrets, nil
		}
	}

	return []database.OAuth2ProviderAppSecret{}, sql.ErrNoRows
}
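// Secrets are returned oldest-first: the comparator above maps CreatedAt
// ordering onto the -1/0/1 contract expected by slices.SortFunc. A sketch of
// an equivalent comparator, assuming Go 1.20+ for time.Time.Compare:
//
//	slices.SortFunc(secrets, func(a, b database.OAuth2ProviderAppSecret) int {
//		return a.CreatedAt.Compare(b.CreatedAt)
//	})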

func (q *FakeQuerier) GetOAuth2ProviderApps(_ context.Context) ([]database.OAuth2ProviderApp, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	slices.SortFunc(q.oauth2ProviderApps, func(a, b database.OAuth2ProviderApp) int {
		return slice.Ascending(a.Name, b.Name)
	})
	return q.oauth2ProviderApps, nil
}

func (q *FakeQuerier) GetOAuthSigningKey(_ context.Context) (string, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.oauthSigningKey, nil
}

func (q *FakeQuerier) GetOrganizationByID(_ context.Context, id uuid.UUID) (database.Organization, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, organization := range q.organizations {
		if organization.ID == id {
			return organization, nil
		}
	}
	return database.Organization{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetOrganizationByName(_ context.Context, name string) (database.Organization, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, organization := range q.organizations {
		if organization.Name == name {
			return organization, nil
		}
	}
	return database.Organization{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetOrganizationIDsByMemberIDs(_ context.Context, ids []uuid.UUID) ([]database.GetOrganizationIDsByMemberIDsRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	getOrganizationIDsByMemberIDRows := make([]database.GetOrganizationIDsByMemberIDsRow, 0, len(ids))
	for _, userID := range ids {
		userOrganizationIDs := make([]uuid.UUID, 0)
		for _, membership := range q.organizationMembers {
			if membership.UserID == userID {
				userOrganizationIDs = append(userOrganizationIDs, membership.OrganizationID)
			}
		}
		getOrganizationIDsByMemberIDRows = append(getOrganizationIDsByMemberIDRows, database.GetOrganizationIDsByMemberIDsRow{
			UserID:          userID,
			OrganizationIDs: userOrganizationIDs,
		})
	}
	if len(getOrganizationIDsByMemberIDRows) == 0 {
		return nil, sql.ErrNoRows
	}
	return getOrganizationIDsByMemberIDRows, nil
}
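// Each input user ID yields exactly one row, even when the user belongs to no
// organization (its OrganizationIDs slice is then empty). Illustrative use,
// assuming memberIDs is a non-empty []uuid.UUID:
//
//	rows, err := q.GetOrganizationIDsByMemberIDs(ctx, memberIDs)
//	// len(rows) == len(memberIDs) when err == nil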

func (q *FakeQuerier) GetOrganizationMemberByUserID(_ context.Context, arg database.GetOrganizationMemberByUserIDParams) (database.OrganizationMember, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.OrganizationMember{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, organizationMember := range q.organizationMembers {
		if organizationMember.OrganizationID != arg.OrganizationID {
			continue
		}
		if organizationMember.UserID != arg.UserID {
			continue
		}
		return organizationMember, nil
	}
	return database.OrganizationMember{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetOrganizationMembershipsByUserID(_ context.Context, userID uuid.UUID) ([]database.OrganizationMember, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	var memberships []database.OrganizationMember
	for _, organizationMember := range q.organizationMembers {
		mem := organizationMember
		if mem.UserID != userID {
			continue
		}
		memberships = append(memberships, mem)
	}
	return memberships, nil
}

func (q *FakeQuerier) GetOrganizations(_ context.Context) ([]database.Organization, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if len(q.organizations) == 0 {
		return nil, sql.ErrNoRows
	}
	return q.organizations, nil
}

func (q *FakeQuerier) GetOrganizationsByUserID(_ context.Context, userID uuid.UUID) ([]database.Organization, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	organizations := make([]database.Organization, 0)
	for _, organizationMember := range q.organizationMembers {
		if organizationMember.UserID != userID {
			continue
		}
		for _, organization := range q.organizations {
			if organization.ID != organizationMember.OrganizationID {
				continue
			}
			organizations = append(organizations, organization)
		}
	}
	if len(organizations) == 0 {
		return nil, sql.ErrNoRows
	}
	return organizations, nil
}

func (q *FakeQuerier) GetParameterSchemasByJobID(_ context.Context, jobID uuid.UUID) ([]database.ParameterSchema, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	parameters := make([]database.ParameterSchema, 0)
	for _, parameterSchema := range q.parameterSchemas {
		if parameterSchema.JobID != jobID {
			continue
		}
		parameters = append(parameters, parameterSchema)
	}
	if len(parameters) == 0 {
		return nil, sql.ErrNoRows
	}
	sort.Slice(parameters, func(i, j int) bool {
		return parameters[i].Index < parameters[j].Index
	})
	return parameters, nil
}

func (q *FakeQuerier) GetPreviousTemplateVersion(_ context.Context, arg database.GetPreviousTemplateVersionParams) (database.TemplateVersion, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.TemplateVersion{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	var currentTemplateVersion database.TemplateVersion
	for _, templateVersion := range q.templateVersions {
		if templateVersion.TemplateID != arg.TemplateID {
			continue
		}
		if templateVersion.Name != arg.Name {
			continue
		}
		if templateVersion.OrganizationID != arg.OrganizationID {
			continue
		}
		currentTemplateVersion = q.templateVersionWithUserNoLock(templateVersion)
		break
	}

	previousTemplateVersions := make([]database.TemplateVersion, 0)
	for _, templateVersion := range q.templateVersions {
		if templateVersion.ID == currentTemplateVersion.ID {
			continue
		}
		if templateVersion.OrganizationID != arg.OrganizationID {
			continue
		}
		if templateVersion.TemplateID != currentTemplateVersion.TemplateID {
			continue
		}

		if templateVersion.CreatedAt.Before(currentTemplateVersion.CreatedAt) {
			previousTemplateVersions = append(previousTemplateVersions, q.templateVersionWithUserNoLock(templateVersion))
		}
	}

	if len(previousTemplateVersions) == 0 {
		return database.TemplateVersion{}, sql.ErrNoRows
	}

	sort.Slice(previousTemplateVersions, func(i, j int) bool {
		return previousTemplateVersions[i].CreatedAt.After(previousTemplateVersions[j].CreatedAt)
	})

	return previousTemplateVersions[0], nil
}
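// Two passes: the first resolves the version named in arg to
// currentTemplateVersion, the second collects every older version of the same
// template and returns the newest of those. Illustrative call, assuming a
// template with at least two versions (orgID, templateID and the name are
// placeholders):
//
//	prev, err := q.GetPreviousTemplateVersion(ctx, database.GetPreviousTemplateVersionParams{
//		OrganizationID: orgID,
//		TemplateID:     templateID,
//		Name:           "v2",
//	})
//	// prev is the most recently created version older than "v2", or
//	// sql.ErrNoRows when none exists.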

func (q *FakeQuerier) GetProvisionerDaemons(_ context.Context) ([]database.ProvisionerDaemon, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	if len(q.provisionerDaemons) == 0 {
		return nil, sql.ErrNoRows
	}
	// copy the data so that the caller can't manipulate any data inside dbmem
	// after returning
	out := make([]database.ProvisionerDaemon, len(q.provisionerDaemons))
	copy(out, q.provisionerDaemons)
	for i := range out {
		// maps are reference types, so we need to clone them
		out[i].Tags = maps.Clone(out[i].Tags)
	}
	return out, nil
}
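// The copy plus maps.Clone above gives callers their own slice and their own
// Tags maps, so mutating the result cannot corrupt the store. Illustrative
// check, assuming at least one daemon is registered:
//
//	daemons, _ := q.GetProvisionerDaemons(ctx)
//	daemons[0].Tags["scope"] = "mutated" // does not affect q's copy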

func (q *FakeQuerier) GetProvisionerJobByID(ctx context.Context, id uuid.UUID) (database.ProvisionerJob, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getProvisionerJobByIDNoLock(ctx, id)
}

func (q *FakeQuerier) GetProvisionerJobsByIDs(_ context.Context, ids []uuid.UUID) ([]database.ProvisionerJob, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	jobs := make([]database.ProvisionerJob, 0)
	for _, job := range q.provisionerJobs {
		for _, id := range ids {
			if id == job.ID {
				// clone the Tags before appending, since maps are reference types and
				// we don't want the caller to be able to mutate the map we have inside
				// dbmem!
				job.Tags = maps.Clone(job.Tags)
				jobs = append(jobs, job)
				break
			}
		}
	}
	if len(jobs) == 0 {
		return nil, sql.ErrNoRows
	}

	return jobs, nil
}

func (q *FakeQuerier) GetProvisionerJobsByIDsWithQueuePosition(_ context.Context, ids []uuid.UUID) ([]database.GetProvisionerJobsByIDsWithQueuePositionRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	jobs := make([]database.GetProvisionerJobsByIDsWithQueuePositionRow, 0)
	queuePosition := int64(1)
	for _, job := range q.provisionerJobs {
		for _, id := range ids {
			if id == job.ID {
				// clone the Tags before appending, since maps are reference types and
				// we don't want the caller to be able to mutate the map we have inside
				// dbmem!
				job.Tags = maps.Clone(job.Tags)
				job := database.GetProvisionerJobsByIDsWithQueuePositionRow{
					ProvisionerJob: job,
				}
				if !job.ProvisionerJob.StartedAt.Valid {
					job.QueuePosition = queuePosition
				}
				jobs = append(jobs, job)
				break
			}
		}
		if !job.StartedAt.Valid {
			queuePosition++
		}
	}
	for i := range jobs {
		if !jobs[i].ProvisionerJob.StartedAt.Valid {
			// Set it to the max position! Index into the slice here so the
			// write is not lost on a loop-variable copy.
			jobs[i].QueueSize = queuePosition
		}
	}
	return jobs, nil
}
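// QueuePosition is 1-based and counts unstarted jobs in insertion order;
// queuePosition keeps advancing over every unstarted job in the store, so the
// final value is what gets written back as QueueSize for jobs still waiting.
// Illustrative use, assuming ids names jobs already present in the store:
//
//	rows, _ := q.GetProvisionerJobsByIDsWithQueuePosition(ctx, ids)
//	// rows[i].QueuePosition is 0 for jobs that have already started.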

func (q *FakeQuerier) GetProvisionerJobsCreatedAfter(_ context.Context, after time.Time) ([]database.ProvisionerJob, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()
feat: Add provisionerdaemon to coderd (#141)
* feat: Add history middleware parameters
These will be used for streaming logs, checking status,
and other operations related to workspace and project
history.
* refactor: Move all HTTP routes to top-level struct
Nesting all structs behind their respective structures
is leaky, and promotes naming conflicts between handlers.
Our HTTP routes cannot have conflicts, so neither should
function naming.
* Add provisioner daemon routes
* Add periodic updates
* Skip pubsub if short
* Return jobs with WorkspaceHistory
* Add endpoints for extracting singular history
* The full end-to-end operation works
* fix: Disable compression for websocket dRPC transport (#145)
There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and dRPC feel like they own the `byte[]` being sent between them. This can lead to data races, in which both `dRPC` and the websocket are writing.
This is just tracking some experimentation to fix that race condition
## Run results: ##
- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45
```
status code 412: The provided project history is running. Wait for it to complete importing!`
```
- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176
```
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
- Run 5: peer failure
- Run 6: Pass ✅
- Run 7: Peer failure
## Open Questions: ##
### Is `dRPC` or `websocket` at fault for the data race?
It looks like this condition is specifically happening when `dRPC` decides to [`SendError`]). This constructs a new byte payload from [`MarshalError`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/error.go#L15) - so `dRPC` has created this buffer and owns it.
From `dRPC`'s perspective, the callstack looks like this:
- [`sendPacket`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcstream/stream.go#L253)
- [`writeFrame`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/writer.go#L65)
- [`AppendFrame`](https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/packet.go#L128)
- with finally the data race happening here:
```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
...
out := buf
out = append(out, control). // <---------
```
This should be fine, since `dPRC` create this buffer, and is taking the byte buffer constructed from `MarshalError` and tacking a bunch of headers on it to create a proper frame.
Once `dRPC` is done writing, it _hangs onto the buffer and resets it here__: https://github.com/storj/drpc/blob/f6e369438f636b47ee788095d3fc13062ffbd019/drpcwire/writer.go#L73
However... the websocket implementation, once it gets the buffer, it runs a `statelessDeflate` [here](https://github.com/nhooyr/websocket/blob/8dee580a7f74cf1713400307b4eee514b927870f/write.go#L180), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](https://github.com/klauspost/compress/blob/a1a9cfc821f00faf2f5231beaa96244344d50391/flate/stateless.go#L94), which is where get our race.
In the case where the `byte[]` aren't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, they both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and `websocket` is compressing on the fly).
### Why does cloning on `Read` fail?
Get a bunch of errors like:
```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```
# UPDATE:
We decided we could disable websocket compression, which would avoid the race because the in-place `deflate` operaton would no longer be run. Trying that out now:
- Run 1: ✅
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3: ✅
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: ✅
* fix: Remove race condition with acquiredJobDone channel (#148)
Found another data race while running the tests: https://github.com/coder/coder/runs/5044320845?check_suite_focus=true#step:7:83
__Issue:__ There is a race in the p.acquiredJobDone chan - in particular, there can be a case where we're waiting on the channel to finish (in close) with <-p.acquiredJobDone, but in parallel, an acquireJob could've been started, which would create a new channel for p.acquiredJobDone. There is a similar race in `close(..)`ing the channel, which also came up in test runs.
__Fix:__ Instead of recreating the channel everytime, we can use `sync.WaitGroup` to accomplish the same functionality - a semaphore to make close wait for the current job to wrap up.
* fix: Bump up workspace history timeout (#149)
This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32
Looking at the timing of the test:
```
t.go:56: 2022-02-02 21:33:21.964 [DEBUG] (terraform-provisioner) <provision.go:139> ran apply
t.go:56: 2022-02-02 21:33:21.991 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.050 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.090 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.140 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.195 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.240 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
It appears that the `terraform apply` job had just finished - with less than a second to spare before our `require.Eventually` timed out - but there was still work to be done (i.e., collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout.
Note that in the setup for this test there is a similar project history wait of 15s, so I borrowed that value here.
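A sketch of the bumped wait using testify's `require.Eventually`; the condition and helper are placeholders, and only the 15s value is taken from the existing project history wait:
```go
package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// workspaceHistoryCompleted is a hypothetical stand-in for the real check
// that the workspace history (including state file collection) has finished.
func workspaceHistoryCompleted() bool { return true }

func TestWorkspaceHistory_Timeout(t *testing.T) {
	// Wait up to 15s instead of 5s, matching the project history wait.
	require.Eventually(t, func() bool {
		return workspaceHistoryCompleted()
	}, 15*time.Second, 25*time.Millisecond)
}
```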
In the future, we can look at using a simple echo provider to exercise this in the unit test in a way that is more reliable in terms of timing. I'll log an issue to track that.
Co-authored-by: Bryan <bryan@coder.com>
2022-02-03 20:34:50 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
jobs := make([]database.ProvisionerJob, 0)
|
|
|
|
for _, job := range q.provisionerJobs {
|
|
|
|
if job.CreatedAt.After(after) {
|
2023-12-14 18:23:29 +00:00
|
|
|
// clone the Tags before appending, since maps are reference types and
|
|
|
|
// we don't want the caller to be able to mutate the map we have inside
|
|
|
|
// dbmem!
|
|
|
|
job.Tags = maps.Clone(job.Tags)
|
2023-06-12 22:40:58 +00:00
|
|
|
jobs = append(jobs, job)
|
2022-01-29 23:38:32 +00:00
|
|
|
}
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return jobs, nil
|
2022-01-29 23:38:32 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetProvisionerLogsAfterID(_ context.Context, arg database.GetProvisionerLogsAfterIDParams) ([]database.ProvisionerJobLog, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2022-06-17 05:26:40 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
logs := make([]database.ProvisionerJobLog, 0)
|
|
|
|
for _, jobLog := range q.provisionerJobLogs {
|
|
|
|
if jobLog.JobID != arg.JobID {
|
|
|
|
continue
|
2022-07-27 15:04:29 +00:00
|
|
|
}
|
2023-06-25 13:17:00 +00:00
|
|
|
if jobLog.ID <= arg.CreatedAfter {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
logs = append(logs, jobLog)
|
|
|
|
}
|
|
|
|
return logs, nil
|
2022-06-17 05:26:40 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetQuotaAllowanceForUser(_ context.Context, userID uuid.UUID) (int64, error) {
|
2022-10-10 20:37:06 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
var sum int64
|
|
|
|
for _, member := range q.groupMembers {
|
|
|
|
if member.UserID != userID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
for _, group := range q.groups {
|
|
|
|
if group.ID == member.GroupID {
|
|
|
|
sum += int64(group.QuotaAllowance)
|
2023-08-17 18:25:16 +00:00
|
|
|
continue
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
}
|
2023-08-17 18:25:16 +00:00
|
|
|
// Grab the quota for the Everyone group.
|
|
|
|
for _, group := range q.groups {
|
|
|
|
if group.ID == group.OrganizationID {
|
|
|
|
sum += int64(group.QuotaAllowance)
|
|
|
|
break
|
|
|
|
}
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return sum, nil
|
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetQuotaConsumedForUser(_ context.Context, userID uuid.UUID) (int64, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
var sum int64
|
|
|
|
for _, workspace := range q.workspaces {
|
|
|
|
if workspace.OwnerID != userID {
|
2022-10-10 20:37:06 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
if workspace.Deleted {
|
2022-10-10 20:37:06 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-07-25 13:14:38 +00:00
|
|
|
var lastBuild database.WorkspaceBuildTable
|
2023-06-12 22:40:58 +00:00
|
|
|
for _, build := range q.workspaceBuilds {
|
|
|
|
if build.WorkspaceID != workspace.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if build.CreatedAt.After(lastBuild.CreatedAt) {
|
|
|
|
lastBuild = build
|
|
|
|
}
|
|
|
|
}
|
|
|
|
sum += int64(lastBuild.DailyCost)
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return sum, nil
|
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-07-26 16:21:04 +00:00
|
|
|
func (q *FakeQuerier) GetReplicaByID(_ context.Context, id uuid.UUID) (database.Replica, error) {
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
|
|
|
for _, replica := range q.replicas {
|
|
|
|
if replica.ID == id {
|
|
|
|
return replica, nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return database.Replica{}, sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetReplicasUpdatedAfter(_ context.Context, updatedAt time.Time) ([]database.Replica, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
replicas := make([]database.Replica, 0)
|
|
|
|
for _, replica := range q.replicas {
|
|
|
|
if replica.UpdatedAt.After(updatedAt) && !replica.StoppedAt.Valid {
|
|
|
|
replicas = append(replicas, replica)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return replicas, nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetServiceBanner(_ context.Context) (string, error) {
|
2022-10-10 20:37:06 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
if q.serviceBanner == nil {
|
|
|
|
return "", sql.ErrNoRows
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
return string(q.serviceBanner), nil
|
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (*FakeQuerier) GetTailnetAgents(context.Context, uuid.UUID) ([]database.TailnetAgent, error) {
|
2023-06-21 12:20:58 +00:00
|
|
|
return nil, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (*FakeQuerier) GetTailnetClientsForAgent(context.Context, uuid.UUID) ([]database.TailnetClient, error) {
|
2023-06-21 12:20:58 +00:00
|
|
|
return nil, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
2023-11-15 06:13:27 +00:00
|
|
|
func (*FakeQuerier) GetTailnetPeers(context.Context, uuid.UUID) ([]database.TailnetPeer, error) {
|
|
|
|
return nil, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
|
|
|
func (*FakeQuerier) GetTailnetTunnelPeerBindings(context.Context, uuid.UUID) ([]database.GetTailnetTunnelPeerBindingsRow, error) {
|
|
|
|
return nil, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
|
|
|
func (*FakeQuerier) GetTailnetTunnelPeerIDs(context.Context, uuid.UUID) ([]database.GetTailnetTunnelPeerIDsRow, error) {
|
|
|
|
return nil, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
2023-08-21 12:08:58 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateAppInsights(ctx context.Context, arg database.GetTemplateAppInsightsParams) ([]database.GetTemplateAppInsightsRow, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
|
|
|
type appKey struct {
|
|
|
|
AccessMethod string
|
|
|
|
SlugOrPort string
|
|
|
|
Slug string
|
|
|
|
DisplayName string
|
|
|
|
Icon string
|
|
|
|
}
|
|
|
|
type uniqueKey struct {
|
|
|
|
TemplateID uuid.UUID
|
|
|
|
UserID uuid.UUID
|
|
|
|
AgentID uuid.UUID
|
|
|
|
AppKey appKey
|
|
|
|
}
|
|
|
|
|
|
|
|
appUsageIntervalsByUserAgentApp := make(map[uniqueKey]map[time.Time]int64)
|
|
|
|
for _, s := range q.workspaceAppStats {
|
|
|
|
// (was.session_started_at >= ts.from_ AND was.session_started_at < ts.to_)
|
|
|
|
// OR (was.session_ended_at > ts.from_ AND was.session_ended_at < ts.to_)
|
|
|
|
// OR (was.session_started_at < ts.from_ AND was.session_ended_at >= ts.to_)
|
|
|
|
if !(((s.SessionStartedAt.After(arg.StartTime) || s.SessionStartedAt.Equal(arg.StartTime)) && s.SessionStartedAt.Before(arg.EndTime)) ||
|
|
|
|
(s.SessionEndedAt.After(arg.StartTime) && s.SessionEndedAt.Before(arg.EndTime)) ||
|
|
|
|
(s.SessionStartedAt.Before(arg.StartTime) && (s.SessionEndedAt.After(arg.EndTime) || s.SessionEndedAt.Equal(arg.EndTime)))) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
w, err := q.getWorkspaceByIDNoLock(ctx, s.WorkspaceID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2023-08-24 10:36:40 +00:00
|
|
|
if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, w.TemplateID) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-08-21 12:08:58 +00:00
|
|
|
app, _ := q.getWorkspaceAppByAgentIDAndSlugNoLock(ctx, database.GetWorkspaceAppByAgentIDAndSlugParams{
|
|
|
|
AgentID: s.AgentID,
|
|
|
|
Slug: s.SlugOrPort,
|
|
|
|
})
|
|
|
|
|
|
|
|
key := uniqueKey{
|
|
|
|
TemplateID: w.TemplateID,
|
|
|
|
UserID: s.UserID,
|
|
|
|
AgentID: s.AgentID,
|
|
|
|
AppKey: appKey{
|
|
|
|
AccessMethod: s.AccessMethod,
|
|
|
|
SlugOrPort: s.SlugOrPort,
|
|
|
|
Slug: app.Slug,
|
|
|
|
DisplayName: app.DisplayName,
|
|
|
|
Icon: app.Icon,
|
|
|
|
},
|
|
|
|
}
|
|
|
|
if appUsageIntervalsByUserAgentApp[key] == nil {
|
|
|
|
appUsageIntervalsByUserAgentApp[key] = make(map[time.Time]int64)
|
|
|
|
}
|
|
|
|
|
|
|
|
t := s.SessionStartedAt.Truncate(5 * time.Minute)
|
|
|
|
if t.Before(arg.StartTime) {
|
|
|
|
t = arg.StartTime
|
|
|
|
}
|
|
|
|
for t.Before(s.SessionEndedAt) && t.Before(arg.EndTime) {
|
2023-08-24 14:34:38 +00:00
|
|
|
appUsageIntervalsByUserAgentApp[key][t] = 60 // 1 minute.
|
|
|
|
t = t.Add(1 * time.Minute)
|
2023-08-21 12:08:58 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
appUsageTemplateIDs := make(map[appKey]map[uuid.UUID]struct{})
|
|
|
|
appUsageUserIDs := make(map[appKey]map[uuid.UUID]struct{})
|
|
|
|
appUsage := make(map[appKey]int64)
|
|
|
|
for uniqueKey, usage := range appUsageIntervalsByUserAgentApp {
|
|
|
|
for _, seconds := range usage {
|
|
|
|
if appUsageTemplateIDs[uniqueKey.AppKey] == nil {
|
|
|
|
appUsageTemplateIDs[uniqueKey.AppKey] = make(map[uuid.UUID]struct{})
|
|
|
|
}
|
|
|
|
appUsageTemplateIDs[uniqueKey.AppKey][uniqueKey.TemplateID] = struct{}{}
|
|
|
|
if appUsageUserIDs[uniqueKey.AppKey] == nil {
|
|
|
|
appUsageUserIDs[uniqueKey.AppKey] = make(map[uuid.UUID]struct{})
|
|
|
|
}
|
|
|
|
appUsageUserIDs[uniqueKey.AppKey][uniqueKey.UserID] = struct{}{}
|
|
|
|
appUsage[uniqueKey.AppKey] += seconds
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
var rows []database.GetTemplateAppInsightsRow
|
|
|
|
for appKey, usage := range appUsage {
|
|
|
|
templateIDs := make([]uuid.UUID, 0, len(appUsageTemplateIDs[appKey]))
|
|
|
|
for templateID := range appUsageTemplateIDs[appKey] {
|
|
|
|
templateIDs = append(templateIDs, templateID)
|
|
|
|
}
|
|
|
|
slices.SortFunc(templateIDs, func(a, b uuid.UUID) int {
|
|
|
|
return slice.Ascending(a.String(), b.String())
|
|
|
|
})
|
|
|
|
activeUserIDs := make([]uuid.UUID, 0, len(appUsageUserIDs[appKey]))
|
|
|
|
for userID := range appUsageUserIDs[appKey] {
|
|
|
|
activeUserIDs = append(activeUserIDs, userID)
|
|
|
|
}
|
|
|
|
slices.SortFunc(activeUserIDs, func(a, b uuid.UUID) int {
|
|
|
|
return slice.Ascending(a.String(), b.String())
|
|
|
|
})
|
|
|
|
|
|
|
|
rows = append(rows, database.GetTemplateAppInsightsRow{
|
|
|
|
TemplateIDs: templateIDs,
|
|
|
|
ActiveUserIDs: activeUserIDs,
|
|
|
|
AccessMethod: appKey.AccessMethod,
|
|
|
|
SlugOrPort: appKey.SlugOrPort,
|
|
|
|
DisplayName: sql.NullString{String: appKey.DisplayName, Valid: appKey.DisplayName != ""},
|
|
|
|
Icon: sql.NullString{String: appKey.Icon, Valid: appKey.Icon != ""},
|
|
|
|
IsApp: appKey.Slug != "",
|
|
|
|
UsageSeconds: usage,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
|
2023-08-24 10:36:40 +00:00
|
|
|
// NOTE(mafredri): Add sorting if we decide on how to handle PostgreSQL collations.
|
|
|
|
// ORDER BY access_method, slug_or_port, display_name, icon, is_app
|
2023-08-21 12:08:58 +00:00
|
|
|
return rows, nil
|
|
|
|
}
|
|
|
|
|
2023-11-07 16:14:59 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateAppInsightsByTemplate(ctx context.Context, arg database.GetTemplateAppInsightsByTemplateParams) ([]database.GetTemplateAppInsightsByTemplateRow, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
|
|
|
type uniqueKey struct {
|
|
|
|
TemplateID uuid.UUID
|
|
|
|
DisplayName string
|
|
|
|
Slug string
|
|
|
|
}
|
|
|
|
|
|
|
|
// map (TemplateID + DisplayName + Slug) x time.Time x UserID x <usage>
|
|
|
|
usageByTemplateAppUser := map[uniqueKey]map[time.Time]map[uuid.UUID]int64{}
|
|
|
|
|
|
|
|
// Review agent stats in terms of usage
|
|
|
|
for _, s := range q.workspaceAppStats {
|
|
|
|
// (was.session_started_at >= ts.from_ AND was.session_started_at < ts.to_)
|
|
|
|
// OR (was.session_ended_at > ts.from_ AND was.session_ended_at < ts.to_)
|
|
|
|
// OR (was.session_started_at < ts.from_ AND was.session_ended_at >= ts.to_)
|
|
|
|
if !(((s.SessionStartedAt.After(arg.StartTime) || s.SessionStartedAt.Equal(arg.StartTime)) && s.SessionStartedAt.Before(arg.EndTime)) ||
|
|
|
|
(s.SessionEndedAt.After(arg.StartTime) && s.SessionEndedAt.Before(arg.EndTime)) ||
|
|
|
|
(s.SessionStartedAt.Before(arg.StartTime) && (s.SessionEndedAt.After(arg.EndTime) || s.SessionEndedAt.Equal(arg.EndTime)))) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
w, err := q.getWorkspaceByIDNoLock(ctx, s.WorkspaceID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
app, _ := q.getWorkspaceAppByAgentIDAndSlugNoLock(ctx, database.GetWorkspaceAppByAgentIDAndSlugParams{
|
|
|
|
AgentID: s.AgentID,
|
|
|
|
Slug: s.SlugOrPort,
|
|
|
|
})
|
|
|
|
|
|
|
|
key := uniqueKey{
|
|
|
|
TemplateID: w.TemplateID,
|
|
|
|
DisplayName: app.DisplayName,
|
|
|
|
Slug: app.Slug,
|
|
|
|
}
|
|
|
|
|
|
|
|
t := s.SessionStartedAt.Truncate(time.Minute)
|
|
|
|
if t.Before(arg.StartTime) {
|
|
|
|
t = arg.StartTime
|
|
|
|
}
|
|
|
|
for t.Before(s.SessionEndedAt) && t.Before(arg.EndTime) {
|
|
|
|
if _, ok := usageByTemplateAppUser[key]; !ok {
|
|
|
|
usageByTemplateAppUser[key] = map[time.Time]map[uuid.UUID]int64{}
|
|
|
|
}
|
|
|
|
if _, ok := usageByTemplateAppUser[key][t]; !ok {
|
|
|
|
usageByTemplateAppUser[key][t] = map[uuid.UUID]int64{}
|
|
|
|
}
|
|
|
|
if _, ok := usageByTemplateAppUser[key][t][s.UserID]; !ok {
|
|
|
|
usageByTemplateAppUser[key][t][s.UserID] = 60 // 1 minute
|
|
|
|
}
|
|
|
|
t = t.Add(1 * time.Minute)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Sort usage data
|
|
|
|
usageKeys := make([]uniqueKey, len(usageByTemplateAppUser))
|
|
|
|
var i int
|
|
|
|
for key := range usageByTemplateAppUser {
|
|
|
|
usageKeys[i] = key
|
|
|
|
i++
|
|
|
|
}
|
|
|
|
|
|
|
|
slices.SortFunc(usageKeys, func(a, b uniqueKey) int {
|
|
|
|
if a.TemplateID != b.TemplateID {
|
|
|
|
return slice.Ascending(a.TemplateID.String(), b.TemplateID.String())
|
|
|
|
}
|
|
|
|
if a.DisplayName != b.DisplayName {
|
|
|
|
return slice.Ascending(a.DisplayName, b.DisplayName)
|
|
|
|
}
|
|
|
|
return slice.Ascending(a.Slug, b.Slug)
|
|
|
|
})
|
|
|
|
|
|
|
|
// Build result
|
|
|
|
var result []database.GetTemplateAppInsightsByTemplateRow
|
|
|
|
for _, usageKey := range usageKeys {
|
|
|
|
r := database.GetTemplateAppInsightsByTemplateRow{
|
|
|
|
TemplateID: usageKey.TemplateID,
|
|
|
|
DisplayName: sql.NullString{String: usageKey.DisplayName, Valid: true},
|
|
|
|
SlugOrPort: usageKey.Slug,
|
|
|
|
}
|
|
|
|
for _, mUserUsage := range usageByTemplateAppUser[usageKey] {
|
|
|
|
r.ActiveUsers += int64(len(mUserUsage))
|
|
|
|
for _, usage := range mUserUsage {
|
|
|
|
r.UsageSeconds += usage
|
|
|
|
}
|
|
|
|
}
|
|
|
|
result = append(result, r)
|
|
|
|
}
|
|
|
|
return result, nil
|
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateAverageBuildTime(ctx context.Context, arg database.GetTemplateAverageBuildTimeParams) (database.GetTemplateAverageBuildTimeRow, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return database.GetTemplateAverageBuildTimeRow{}, err
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
var emptyRow database.GetTemplateAverageBuildTimeRow
|
|
|
|
var (
|
|
|
|
startTimes []float64
|
|
|
|
stopTimes []float64
|
|
|
|
deleteTimes []float64
|
|
|
|
)
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
for _, wb := range q.workspaceBuilds {
|
|
|
|
version, err := q.getTemplateVersionByIDNoLock(ctx, wb.TemplateVersionID)
|
|
|
|
if err != nil {
|
|
|
|
return emptyRow, err
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
if version.TemplateID != arg.TemplateID {
|
2022-10-10 20:37:06 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
job, err := q.getProvisionerJobByIDNoLock(ctx, wb.JobID)
|
|
|
|
if err != nil {
|
|
|
|
return emptyRow, err
|
|
|
|
}
|
|
|
|
if job.CompletedAt.Valid {
|
|
|
|
took := job.CompletedAt.Time.Sub(job.StartedAt.Time).Seconds()
|
|
|
|
switch wb.Transition {
|
|
|
|
case database.WorkspaceTransitionStart:
|
|
|
|
startTimes = append(startTimes, took)
|
|
|
|
case database.WorkspaceTransitionStop:
|
|
|
|
stopTimes = append(stopTimes, took)
|
|
|
|
case database.WorkspaceTransitionDelete:
|
|
|
|
deleteTimes = append(deleteTimes, took)
|
|
|
|
}
|
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
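// tryPercentile returns the (approximate) p-th percentile of fs after sorting,
// or -1 when no samples were recorded.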
tryPercentile := func(fs []float64, p float64) float64 {
|
|
|
|
if len(fs) == 0 {
|
|
|
|
return -1
|
|
|
|
}
|
|
|
|
sort.Float64s(fs)
|
|
|
|
return fs[int(float64(len(fs))*p/100)]
|
|
|
|
}
|
|
|
|
|
|
|
|
var row database.GetTemplateAverageBuildTimeRow
|
|
|
|
row.Delete50, row.Delete95 = tryPercentile(deleteTimes, 50), tryPercentile(deleteTimes, 95)
|
|
|
|
row.Stop50, row.Stop95 = tryPercentile(stopTimes, 50), tryPercentile(stopTimes, 95)
|
|
|
|
row.Start50, row.Start95 = tryPercentile(startTimes, 50), tryPercentile(startTimes, 95)
|
|
|
|
return row, nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateByID(ctx context.Context, id uuid.UUID) (database.Template, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
|
|
|
return q.getTemplateByIDNoLock(ctx, id)
|
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateByOrganizationAndName(_ context.Context, arg database.GetTemplateByOrganizationAndNameParams) (database.Template, error) {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-06-12 22:40:58 +00:00
|
|
|
return database.Template{}, err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2022-03-22 19:17:50 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2022-02-03 20:34:50 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
for _, template := range q.templates {
|
|
|
|
if template.OrganizationID != arg.OrganizationID {
|
2022-01-24 17:07:42 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
if !strings.EqualFold(template.Name, arg.Name) {
|
2022-01-24 17:07:42 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
if template.Deleted != arg.Deleted {
|
|
|
|
continue
|
2022-04-28 14:10:17 +00:00
|
|
|
}
|
2023-07-19 20:07:33 +00:00
|
|
|
return q.templateWithUserNoLock(template), nil
|
2022-04-28 14:10:17 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return database.Template{}, sql.ErrNoRows
|
2022-04-28 14:10:17 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateDAUs(_ context.Context, arg database.GetTemplateDAUsParams) ([]database.GetTemplateDAUsRow, error) {
|
2022-04-29 14:04:19 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
seens := make(map[time.Time]map[uuid.UUID]struct{})
|
|
|
|
|
|
|
|
for _, as := range q.workspaceAgentStats {
|
|
|
|
if as.TemplateID != arg.TemplateID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if as.ConnectionCount == 0 {
|
2022-04-29 14:04:19 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
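// Shift the timestamp back by the requested timezone offset (in hours) before
// truncating, so each stat lands in the caller's local day bucket.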
date := as.CreatedAt.UTC().Add(time.Duration(arg.TzOffset) * time.Hour * -1).Truncate(time.Hour * 24)
|
2023-01-23 11:14:47 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
dateEntry := seens[date]
|
|
|
|
if dateEntry == nil {
|
|
|
|
dateEntry = make(map[uuid.UUID]struct{})
|
|
|
|
}
|
|
|
|
dateEntry[as.UserID] = struct{}{}
|
|
|
|
seens[date] = dateEntry
|
|
|
|
}
|
2022-09-16 00:06:39 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
seenKeys := maps.Keys(seens)
|
|
|
|
sort.Slice(seenKeys, func(i, j int) bool {
|
|
|
|
return seenKeys[i].Before(seenKeys[j])
|
|
|
|
})
|
2022-04-29 14:04:19 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
var rs []database.GetTemplateDAUsRow
|
|
|
|
for _, key := range seenKeys {
|
|
|
|
ids := seens[key]
|
|
|
|
for id := range ids {
|
|
|
|
rs = append(rs, database.GetTemplateDAUsRow{
|
|
|
|
Date: key,
|
|
|
|
UserID: id,
|
|
|
|
})
|
2022-04-29 14:04:19 +00:00
|
|
|
}
|
|
|
|
}
|
2022-09-16 00:06:39 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
return rs, nil
|
2022-04-29 14:04:19 +00:00
|
|
|
}
|
|
|
|
|
2023-09-15 12:01:00 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateInsights(_ context.Context, arg database.GetTemplateInsightsParams) (database.GetTemplateInsightsRow, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return database.GetTemplateInsightsRow{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
templateIDSet := make(map[uuid.UUID]struct{})
|
|
|
|
appUsageIntervalsByUser := make(map[uuid.UUID]map[time.Time]*database.GetTemplateInsightsRow)
|
2023-10-19 08:45:12 +00:00
|
|
|
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-09-15 12:01:00 +00:00
|
|
|
for _, s := range q.workspaceAgentStats {
|
|
|
|
if s.CreatedAt.Before(arg.StartTime) || s.CreatedAt.Equal(arg.EndTime) || s.CreatedAt.After(arg.EndTime) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, s.TemplateID) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if s.ConnectionCount == 0 {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
templateIDSet[s.TemplateID] = struct{}{}
|
|
|
|
if appUsageIntervalsByUser[s.UserID] == nil {
|
|
|
|
appUsageIntervalsByUser[s.UserID] = make(map[time.Time]*database.GetTemplateInsightsRow)
|
|
|
|
}
|
|
|
|
t := s.CreatedAt.Truncate(time.Minute)
|
|
|
|
if _, ok := appUsageIntervalsByUser[s.UserID][t]; !ok {
|
|
|
|
appUsageIntervalsByUser[s.UserID][t] = &database.GetTemplateInsightsRow{}
|
|
|
|
}
|
|
|
|
|
|
|
|
if s.SessionCountJetBrains > 0 {
|
|
|
|
appUsageIntervalsByUser[s.UserID][t].UsageJetbrainsSeconds = 60
|
|
|
|
}
|
|
|
|
if s.SessionCountVSCode > 0 {
|
|
|
|
appUsageIntervalsByUser[s.UserID][t].UsageVscodeSeconds = 60
|
|
|
|
}
|
|
|
|
if s.SessionCountReconnectingPTY > 0 {
|
|
|
|
appUsageIntervalsByUser[s.UserID][t].UsageReconnectingPtySeconds = 60
|
|
|
|
}
|
|
|
|
if s.SessionCountSSH > 0 {
|
|
|
|
appUsageIntervalsByUser[s.UserID][t].UsageSshSeconds = 60
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
templateIDs := make([]uuid.UUID, 0, len(templateIDSet))
|
|
|
|
for templateID := range templateIDSet {
|
|
|
|
templateIDs = append(templateIDs, templateID)
|
|
|
|
}
|
|
|
|
slices.SortFunc(templateIDs, func(a, b uuid.UUID) int {
|
|
|
|
return slice.Ascending(a.String(), b.String())
|
|
|
|
})
|
|
|
|
activeUserIDs := make([]uuid.UUID, 0, len(appUsageIntervalsByUser))
|
|
|
|
for userID := range appUsageIntervalsByUser {
|
|
|
|
activeUserIDs = append(activeUserIDs, userID)
|
|
|
|
}
|
|
|
|
|
|
|
|
result := database.GetTemplateInsightsRow{
|
|
|
|
TemplateIDs: templateIDs,
|
|
|
|
ActiveUserIDs: activeUserIDs,
|
|
|
|
}
|
|
|
|
for _, intervals := range appUsageIntervalsByUser {
|
|
|
|
for _, interval := range intervals {
|
|
|
|
result.UsageJetbrainsSeconds += interval.UsageJetbrainsSeconds
|
|
|
|
result.UsageVscodeSeconds += interval.UsageVscodeSeconds
|
|
|
|
result.UsageReconnectingPtySeconds += interval.UsageReconnectingPtySeconds
|
|
|
|
result.UsageSshSeconds += interval.UsageSshSeconds
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return result, nil
|
|
|
|
}
|
|
|
|
|
|
|
|
func (q *FakeQuerier) GetTemplateInsightsByInterval(ctx context.Context, arg database.GetTemplateInsightsByIntervalParams) ([]database.GetTemplateInsightsByIntervalRow, error) {
|
2023-07-21 18:00:19 +00:00
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2023-08-21 12:08:58 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-09-15 12:01:00 +00:00
|
|
|
type statByInterval struct {
|
2023-07-21 18:00:19 +00:00
|
|
|
startTime, endTime time.Time
|
|
|
|
userSet map[uuid.UUID]struct{}
|
|
|
|
templateIDSet map[uuid.UUID]struct{}
|
|
|
|
}
|
2023-09-15 12:01:00 +00:00
|
|
|
|
|
|
|
statsByInterval := []statByInterval{{arg.StartTime, arg.StartTime.AddDate(0, 0, int(arg.IntervalDays)), make(map[uuid.UUID]struct{}), make(map[uuid.UUID]struct{})}}
|
|
|
|
for statsByInterval[len(statsByInterval)-1].endTime.Before(arg.EndTime) {
|
|
|
|
statsByInterval = append(statsByInterval, statByInterval{statsByInterval[len(statsByInterval)-1].endTime, statsByInterval[len(statsByInterval)-1].endTime.AddDate(0, 0, int(arg.IntervalDays)), make(map[uuid.UUID]struct{}), make(map[uuid.UUID]struct{})})
|
2023-07-21 18:00:19 +00:00
|
|
|
}
|
2023-09-15 12:01:00 +00:00
|
|
|
if statsByInterval[len(statsByInterval)-1].endTime.After(arg.EndTime) {
|
|
|
|
statsByInterval[len(statsByInterval)-1].endTime = arg.EndTime
|
2023-07-21 18:00:19 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
for _, s := range q.workspaceAgentStats {
|
|
|
|
if s.CreatedAt.Before(arg.StartTime) || s.CreatedAt.Equal(arg.EndTime) || s.CreatedAt.After(arg.EndTime) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, s.TemplateID) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if s.ConnectionCount == 0 {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-09-15 12:01:00 +00:00
|
|
|
for _, ds := range statsByInterval {
|
2023-07-21 18:00:19 +00:00
|
|
|
if s.CreatedAt.Before(ds.startTime) || s.CreatedAt.Equal(ds.endTime) || s.CreatedAt.After(ds.endTime) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
ds.userSet[s.UserID] = struct{}{}
|
|
|
|
ds.templateIDSet[s.TemplateID] = struct{}{}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-08-21 12:08:58 +00:00
|
|
|
for _, s := range q.workspaceAppStats {
|
2023-08-24 10:36:40 +00:00
|
|
|
w, err := q.getWorkspaceByIDNoLock(ctx, s.WorkspaceID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, w.TemplateID) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-09-15 12:01:00 +00:00
|
|
|
for _, ds := range statsByInterval {
|
2023-08-21 12:08:58 +00:00
|
|
|
// (was.session_started_at >= ts.from_ AND was.session_started_at < ts.to_)
|
|
|
|
// OR (was.session_ended_at > ts.from_ AND was.session_ended_at < ts.to_)
|
|
|
|
// OR (was.session_started_at < ts.from_ AND was.session_ended_at >= ts.to_)
|
2023-08-24 10:36:40 +00:00
|
|
|
if !(((s.SessionStartedAt.After(ds.startTime) || s.SessionStartedAt.Equal(ds.startTime)) && s.SessionStartedAt.Before(ds.endTime)) ||
|
|
|
|
(s.SessionEndedAt.After(ds.startTime) && s.SessionEndedAt.Before(ds.endTime)) ||
|
|
|
|
(s.SessionStartedAt.Before(ds.startTime) && (s.SessionEndedAt.After(ds.endTime) || s.SessionEndedAt.Equal(ds.endTime)))) {
|
2023-08-21 12:08:58 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
ds.userSet[s.UserID] = struct{}{}
|
|
|
|
ds.templateIDSet[w.TemplateID] = struct{}{}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-09-15 12:01:00 +00:00
|
|
|
var result []database.GetTemplateInsightsByIntervalRow
|
|
|
|
for _, ds := range statsByInterval {
|
2023-07-21 18:00:19 +00:00
|
|
|
templateIDs := make([]uuid.UUID, 0, len(ds.templateIDSet))
|
|
|
|
for templateID := range ds.templateIDSet {
|
|
|
|
templateIDs = append(templateIDs, templateID)
|
|
|
|
}
|
2023-08-09 19:50:26 +00:00
|
|
|
slices.SortFunc(templateIDs, func(a, b uuid.UUID) int {
|
|
|
|
return slice.Ascending(a.String(), b.String())
|
2023-07-21 18:00:19 +00:00
|
|
|
})
|
2023-09-15 12:01:00 +00:00
|
|
|
result = append(result, database.GetTemplateInsightsByIntervalRow{
|
2023-07-21 18:00:19 +00:00
|
|
|
StartTime: ds.startTime,
|
|
|
|
EndTime: ds.endTime,
|
|
|
|
TemplateIDs: templateIDs,
|
|
|
|
ActiveUsers: int64(len(ds.userSet)),
|
|
|
|
})
|
|
|
|
}
|
|
|
|
return result, nil
|
|
|
|
}
|
|
|
|
|
2023-10-19 08:45:12 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateInsightsByTemplate(_ context.Context, arg database.GetTemplateInsightsByTemplateParams) ([]database.GetTemplateInsightsByTemplateRow, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
|
|
|
// map time.Time x TemplateID x UserID x <usage>
|
|
|
|
appUsageByTemplateAndUser := map[time.Time]map[uuid.UUID]map[uuid.UUID]database.GetTemplateInsightsByTemplateRow{}
|
|
|
|
|
|
|
|
// Review agent stats in terms of usage
|
|
|
|
templateIDSet := make(map[uuid.UUID]struct{})
|
|
|
|
|
|
|
|
for _, s := range q.workspaceAgentStats {
|
|
|
|
if s.CreatedAt.Before(arg.StartTime) || s.CreatedAt.Equal(arg.EndTime) || s.CreatedAt.After(arg.EndTime) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if s.ConnectionCount == 0 {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
t := s.CreatedAt.Truncate(time.Minute)
|
|
|
|
templateIDSet[s.TemplateID] = struct{}{}
|
|
|
|
|
|
|
|
if _, ok := appUsageByTemplateAndUser[t]; !ok {
|
|
|
|
appUsageByTemplateAndUser[t] = make(map[uuid.UUID]map[uuid.UUID]database.GetTemplateInsightsByTemplateRow)
|
|
|
|
}
|
|
|
|
|
|
|
|
if _, ok := appUsageByTemplateAndUser[t][s.TemplateID]; !ok {
|
|
|
|
appUsageByTemplateAndUser[t][s.TemplateID] = make(map[uuid.UUID]database.GetTemplateInsightsByTemplateRow)
|
|
|
|
}
|
|
|
|
|
|
|
|
if _, ok := appUsageByTemplateAndUser[t][s.TemplateID][s.UserID]; !ok {
|
|
|
|
appUsageByTemplateAndUser[t][s.TemplateID][s.UserID] = database.GetTemplateInsightsByTemplateRow{}
|
|
|
|
}
|
|
|
|
|
|
|
|
u := appUsageByTemplateAndUser[t][s.TemplateID][s.UserID]
|
|
|
|
if s.SessionCountJetBrains > 0 {
|
|
|
|
u.UsageJetbrainsSeconds = 60
|
|
|
|
}
|
|
|
|
if s.SessionCountVSCode > 0 {
|
|
|
|
u.UsageVscodeSeconds = 60
|
|
|
|
}
|
|
|
|
if s.SessionCountReconnectingPTY > 0 {
|
|
|
|
u.UsageReconnectingPtySeconds = 60
|
|
|
|
}
|
|
|
|
if s.SessionCountSSH > 0 {
|
|
|
|
u.UsageSshSeconds = 60
|
|
|
|
}
|
|
|
|
appUsageByTemplateAndUser[t][s.TemplateID][s.UserID] = u
|
|
|
|
}
|
|
|
|
|
|
|
|
// Sort used templates
|
|
|
|
templateIDs := make([]uuid.UUID, 0, len(templateIDSet))
|
|
|
|
for templateID := range templateIDSet {
|
|
|
|
templateIDs = append(templateIDs, templateID)
|
|
|
|
}
|
|
|
|
slices.SortFunc(templateIDs, func(a, b uuid.UUID) int {
|
|
|
|
return slice.Ascending(a.String(), b.String())
|
|
|
|
})
|
|
|
|
|
|
|
|
// Build result
|
|
|
|
var result []database.GetTemplateInsightsByTemplateRow
|
|
|
|
for _, templateID := range templateIDs {
|
|
|
|
r := database.GetTemplateInsightsByTemplateRow{
|
|
|
|
TemplateID: templateID,
|
|
|
|
}
|
|
|
|
|
|
|
|
uniqueUsers := map[uuid.UUID]struct{}{}
|
|
|
|
|
|
|
|
for _, mTemplateUserUsage := range appUsageByTemplateAndUser {
|
|
|
|
mUserUsage, ok := mTemplateUserUsage[templateID]
|
|
|
|
if !ok {
|
|
|
|
continue // template was not used in this time window
|
|
|
|
}
|
|
|
|
|
|
|
|
for userID, usage := range mUserUsage {
|
|
|
|
uniqueUsers[userID] = struct{}{}
|
|
|
|
|
|
|
|
r.UsageJetbrainsSeconds += usage.UsageJetbrainsSeconds
|
|
|
|
r.UsageVscodeSeconds += usage.UsageVscodeSeconds
|
|
|
|
r.UsageReconnectingPtySeconds += usage.UsageReconnectingPtySeconds
|
|
|
|
r.UsageSshSeconds += usage.UsageSshSeconds
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
r.ActiveUsers = int64(len(uniqueUsers))
|
|
|
|
|
|
|
|
result = append(result, r)
|
|
|
|
}
|
|
|
|
return result, nil
|
|
|
|
}
|
|
|
|
|
2023-08-03 14:43:23 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateParameterInsights(ctx context.Context, arg database.GetTemplateParameterInsightsParams) ([]database.GetTemplateParameterInsightsRow, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
|
|
|
// WITH latest_workspace_builds ...
|
|
|
|
latestWorkspaceBuilds := make(map[uuid.UUID]database.WorkspaceBuildTable)
|
|
|
|
for _, wb := range q.workspaceBuilds {
|
|
|
|
if wb.CreatedAt.Before(arg.StartTime) || wb.CreatedAt.Equal(arg.EndTime) || wb.CreatedAt.After(arg.EndTime) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if latestWorkspaceBuilds[wb.WorkspaceID].BuildNumber < wb.BuildNumber {
|
|
|
|
latestWorkspaceBuilds[wb.WorkspaceID] = wb
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if len(arg.TemplateIDs) > 0 {
|
|
|
|
for wsID := range latestWorkspaceBuilds {
|
|
|
|
ws, err := q.getWorkspaceByIDNoLock(ctx, wsID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
if slices.Contains(arg.TemplateIDs, ws.TemplateID) {
|
|
|
|
delete(latestWorkspaceBuilds, wsID)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
// WITH unique_template_params ...
|
|
|
|
num := int64(0)
|
|
|
|
uniqueTemplateParams := make(map[string]*database.GetTemplateParameterInsightsRow)
|
|
|
|
uniqueTemplateParamWorkspaceBuildIDs := make(map[string][]uuid.UUID)
|
|
|
|
for _, wb := range latestWorkspaceBuilds {
|
|
|
|
tv, err := q.getTemplateVersionByIDNoLock(ctx, wb.TemplateVersionID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
for _, tvp := range q.templateVersionParameters {
|
|
|
|
if tvp.TemplateVersionID != tv.ID {
|
|
|
|
continue
|
|
|
|
}
|
2023-08-24 10:36:40 +00:00
|
|
|
// GROUP BY tvp.name, tvp.type, tvp.display_name, tvp.description, tvp.options
|
|
|
|
key := fmt.Sprintf("%s:%s:%s:%s:%s", tvp.Name, tvp.Type, tvp.DisplayName, tvp.Description, tvp.Options)
|
2023-08-03 14:43:23 +00:00
|
|
|
if _, ok := uniqueTemplateParams[key]; !ok {
|
|
|
|
num++
|
|
|
|
uniqueTemplateParams[key] = &database.GetTemplateParameterInsightsRow{
|
|
|
|
Num: num,
|
|
|
|
Name: tvp.Name,
|
2023-08-07 16:11:44 +00:00
|
|
|
Type: tvp.Type,
|
2023-08-03 14:43:23 +00:00
|
|
|
DisplayName: tvp.DisplayName,
|
|
|
|
Description: tvp.Description,
|
|
|
|
Options: tvp.Options,
|
|
|
|
}
|
|
|
|
}
|
|
|
|
uniqueTemplateParams[key].TemplateIDs = append(uniqueTemplateParams[key].TemplateIDs, tv.TemplateID.UUID)
|
|
|
|
uniqueTemplateParamWorkspaceBuildIDs[key] = append(uniqueTemplateParamWorkspaceBuildIDs[key], wb.ID)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
// SELECT ...
|
|
|
|
counts := make(map[string]map[string]int64)
|
|
|
|
for key, utp := range uniqueTemplateParams {
|
|
|
|
for _, wbp := range q.workspaceBuildParameters {
|
|
|
|
if !slices.Contains(uniqueTemplateParamWorkspaceBuildIDs[key], wbp.WorkspaceBuildID) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if wbp.Name != utp.Name {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if counts[key] == nil {
|
|
|
|
counts[key] = make(map[string]int64)
|
|
|
|
}
|
|
|
|
counts[key][wbp.Value]++
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
var rows []database.GetTemplateParameterInsightsRow
|
|
|
|
for key, utp := range uniqueTemplateParams {
|
|
|
|
for value, count := range counts[key] {
|
|
|
|
rows = append(rows, database.GetTemplateParameterInsightsRow{
|
|
|
|
Num: utp.Num,
|
|
|
|
TemplateIDs: uniqueSortedUUIDs(utp.TemplateIDs),
|
|
|
|
Name: utp.Name,
|
|
|
|
DisplayName: utp.DisplayName,
|
2023-08-07 16:11:44 +00:00
|
|
|
Type: utp.Type,
|
2023-08-03 14:43:23 +00:00
|
|
|
Description: utp.Description,
|
|
|
|
Options: utp.Options,
|
|
|
|
Value: value,
|
|
|
|
Count: count,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-08-24 10:36:40 +00:00
|
|
|
// NOTE(mafredri): Add sorting if we decide on how to handle PostgreSQL collations.
|
|
|
|
// ORDER BY utp.name, utp.type, utp.display_name, utp.description, utp.options, wbp.value
|
2023-08-03 14:43:23 +00:00
|
|
|
return rows, nil
|
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateVersionByID(ctx context.Context, templateVersionID uuid.UUID) (database.TemplateVersion, error) {
|
2022-03-22 19:17:50 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2022-02-03 20:34:50 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
return q.getTemplateVersionByIDNoLock(ctx, templateVersionID)
|
2022-02-01 05:36:15 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateVersionByJobID(_ context.Context, jobID uuid.UUID) (database.TemplateVersion, error) {
|
2022-03-22 19:17:50 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2022-03-07 17:40:54 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
for _, templateVersion := range q.templateVersions {
|
|
|
|
if templateVersion.JobID != jobID {
|
|
|
|
continue
|
2022-03-07 17:40:54 +00:00
|
|
|
}
|
2023-07-25 13:14:38 +00:00
|
|
|
return q.templateVersionWithUserNoLock(templateVersion), nil
|
2022-03-07 17:40:54 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return database.TemplateVersion{}, sql.ErrNoRows
|
2022-03-07 17:40:54 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateVersionByTemplateIDAndName(_ context.Context, arg database.GetTemplateVersionByTemplateIDAndNameParams) (database.TemplateVersion, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return database.TemplateVersion{}, err
|
|
|
|
}
|
|
|
|
|
2022-04-11 21:06:15 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
for _, templateVersion := range q.templateVersions {
|
|
|
|
if templateVersion.TemplateID != arg.TemplateID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if !strings.EqualFold(templateVersion.Name, arg.Name) {
|
|
|
|
continue
|
2022-04-11 21:06:15 +00:00
|
|
|
}
|
2023-07-25 13:14:38 +00:00
|
|
|
return q.templateVersionWithUserNoLock(templateVersion), nil
|
2022-04-11 21:06:15 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return database.TemplateVersion{}, sql.ErrNoRows
|
2022-04-11 21:06:15 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateVersionParameters(_ context.Context, templateVersionID uuid.UUID) ([]database.TemplateVersionParameter, error) {
|
2022-03-22 19:17:50 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2022-02-28 18:00:52 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
parameters := make([]database.TemplateVersionParameter, 0)
|
|
|
|
for _, param := range q.templateVersionParameters {
|
|
|
|
if param.TemplateVersionID != templateVersionID {
|
|
|
|
continue
|
2022-02-28 18:00:52 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
parameters = append(parameters, param)
|
2022-02-28 18:00:52 +00:00
|
|
|
}
|
2023-06-30 10:41:55 +00:00
|
|
|
sort.Slice(parameters, func(i, j int) bool {
|
|
|
|
if parameters[i].DisplayOrder != parameters[j].DisplayOrder {
|
|
|
|
return parameters[i].DisplayOrder < parameters[j].DisplayOrder
|
|
|
|
}
|
|
|
|
return strings.ToLower(parameters[i].Name) < strings.ToLower(parameters[j].Name)
|
|
|
|
})
|
2023-06-12 22:40:58 +00:00
|
|
|
return parameters, nil
|
2022-02-28 18:00:52 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateVersionVariables(_ context.Context, templateVersionID uuid.UUID) ([]database.TemplateVersionVariable, error) {
|
2022-03-22 19:17:50 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2022-02-28 18:00:52 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
variables := make([]database.TemplateVersionVariable, 0)
|
|
|
|
for _, variable := range q.templateVersionVariables {
|
|
|
|
if variable.TemplateVersionID != templateVersionID {
|
|
|
|
continue
|
2022-02-28 18:00:52 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
variables = append(variables, variable)
|
2022-02-28 18:00:52 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return variables, nil
|
2022-02-28 18:00:52 +00:00
|
|
|
}
|
|
|
|
|
2023-07-12 09:35:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateVersionsByIDs(_ context.Context, ids []uuid.UUID) ([]database.TemplateVersion, error) {
|
2022-06-17 05:26:40 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
versions := make([]database.TemplateVersion, 0)
|
|
|
|
for _, version := range q.templateVersions {
|
|
|
|
for _, id := range ids {
|
|
|
|
if id == version.ID {
|
2023-07-25 13:14:38 +00:00
|
|
|
versions = append(versions, q.templateVersionWithUserNoLock(version))
|
2023-06-12 22:40:58 +00:00
|
|
|
break
|
|
|
|
}
|
2022-06-17 05:26:40 +00:00
|
|
|
}
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
if len(versions) == 0 {
|
|
|
|
return nil, sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
|
|
|
return versions, nil
|
2022-06-17 05:26:40 +00:00
|
|
|
}

func (q *FakeQuerier) GetTemplateVersionsByTemplateID(_ context.Context, arg database.GetTemplateVersionsByTemplateIDParams) (version []database.TemplateVersion, err error) {
    if err := validateDatabaseType(arg); err != nil {
        return version, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    for _, templateVersion := range q.templateVersions {
        if templateVersion.TemplateID.UUID != arg.TemplateID {
            continue
        }
        if arg.Archived.Valid && arg.Archived.Bool != templateVersion.Archived {
            continue
        }
        version = append(version, q.templateVersionWithUserNoLock(templateVersion))
    }

    // Database orders by created_at
    slices.SortFunc(version, func(a, b database.TemplateVersion) int {
        if a.CreatedAt.Equal(b.CreatedAt) {
            // Technically the postgres database also orders by uuid. So match
            // that behavior
            return slice.Ascending(a.ID.String(), b.ID.String())
        }
        if a.CreatedAt.Before(b.CreatedAt) {
            return -1
        }
        return 1
    })

    if arg.AfterID != uuid.Nil {
        found := false
        for i, v := range version {
            if v.ID == arg.AfterID {
                // We want to return all versions after index i.
                version = version[i+1:]
                found = true
                break
            }
        }

        // If no versions come after the cursor ID, return an empty result.
        if !found {
            return nil, sql.ErrNoRows
        }
    }

    if arg.OffsetOpt > 0 {
        if int(arg.OffsetOpt) > len(version)-1 {
            return nil, sql.ErrNoRows
        }
        version = version[arg.OffsetOpt:]
    }

    if arg.LimitOpt > 0 {
        if int(arg.LimitOpt) > len(version) {
            arg.LimitOpt = int32(len(version))
        }
        version = version[:arg.LimitOpt]
    }

    if len(version) == 0 {
        return nil, sql.ErrNoRows
    }

    return version, nil
}
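
// GetTemplateVersionsCreatedAfter returns every template version whose
// CreatedAt timestamp is strictly after the given cutoff, resolving the
// created-by user for each row. For illustration only, a hypothetical caller
// fetching the last day's versions might look like:
//
//	since := time.Now().Add(-24 * time.Hour)
//	versions, err := q.GetTemplateVersionsCreatedAfter(ctx, since)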
func (q *FakeQuerier) GetTemplateVersionsCreatedAfter(_ context.Context, after time.Time) ([]database.TemplateVersion, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    versions := make([]database.TemplateVersion, 0)
    for _, version := range q.templateVersions {
        if version.CreatedAt.After(after) {
            versions = append(versions, q.templateVersionWithUserNoLock(version))
        }
    }
    return versions, nil
}

func (q *FakeQuerier) GetTemplates(_ context.Context) ([]database.Template, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    templates := slices.Clone(q.templates)
    slices.SortFunc(templates, func(a, b database.TemplateTable) int {
        if a.Name != b.Name {
            return slice.Ascending(a.Name, b.Name)
        }
        return slice.Ascending(a.ID.String(), b.ID.String())
    })

    return q.templatesWithUserNoLock(templates), nil
}

func (q *FakeQuerier) GetTemplatesWithFilter(ctx context.Context, arg database.GetTemplatesWithFilterParams) ([]database.Template, error) {
    if err := validateDatabaseType(arg); err != nil {
        return nil, err
    }

    return q.GetAuthorizedTemplates(ctx, arg, nil)
}

func (q *FakeQuerier) GetUnexpiredLicenses(_ context.Context) ([]database.License, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    now := time.Now()
    var results []database.License
    for _, l := range q.licenses {
        if l.Exp.After(now) {
            results = append(results, l)
        }
    }
    sort.Slice(results, func(i, j int) bool { return results[i].ID < results[j].ID })
    return results, nil
}
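
// GetUserActivityInsights aggregates application and agent stats into
// per-user usage. As a rough in-memory approximation of the SQL query,
// activity is bucketed into one-minute slots per (template, user) key, each
// active minute counts as 60 seconds, and overlapping app sessions and agent
// stats collapse into the same bucket so a minute is never counted twice.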
func (q *FakeQuerier) GetUserActivityInsights(ctx context.Context, arg database.GetUserActivityInsightsParams) ([]database.GetUserActivityInsightsRow, error) {
    err := validateDatabaseType(arg)
    if err != nil {
        return nil, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    type uniqueKey struct {
        TemplateID uuid.UUID
        UserID     uuid.UUID
    }

    combinedStats := make(map[uniqueKey]map[time.Time]int64)

    // Get application stats
    for _, s := range q.workspaceAppStats {
        if !(((s.SessionStartedAt.After(arg.StartTime) || s.SessionStartedAt.Equal(arg.StartTime)) && s.SessionStartedAt.Before(arg.EndTime)) ||
            (s.SessionEndedAt.After(arg.StartTime) && s.SessionEndedAt.Before(arg.EndTime)) ||
            (s.SessionStartedAt.Before(arg.StartTime) && (s.SessionEndedAt.After(arg.EndTime) || s.SessionEndedAt.Equal(arg.EndTime)))) {
            continue
        }

        w, err := q.getWorkspaceByIDNoLock(ctx, s.WorkspaceID)
        if err != nil {
            return nil, err
        }

        if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, w.TemplateID) {
            continue
        }

        key := uniqueKey{
            TemplateID: w.TemplateID,
            UserID:     s.UserID,
        }
        if combinedStats[key] == nil {
            combinedStats[key] = make(map[time.Time]int64)
        }

        t := s.SessionStartedAt.Truncate(time.Minute)
        if t.Before(arg.StartTime) {
            t = arg.StartTime
        }
        for t.Before(s.SessionEndedAt) && t.Before(arg.EndTime) {
            combinedStats[key][t] = 60
            t = t.Add(1 * time.Minute)
        }
    }

    // Get session stats
    for _, s := range q.workspaceAgentStats {
        if s.CreatedAt.Before(arg.StartTime) || s.CreatedAt.Equal(arg.EndTime) || s.CreatedAt.After(arg.EndTime) {
            continue
        }
        if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, s.TemplateID) {
            continue
        }
        if s.ConnectionCount == 0 {
            continue
        }

        key := uniqueKey{
            TemplateID: s.TemplateID,
            UserID:     s.UserID,
        }

        if combinedStats[key] == nil {
            combinedStats[key] = make(map[time.Time]int64)
        }

        if s.SessionCountJetBrains > 0 || s.SessionCountVSCode > 0 || s.SessionCountReconnectingPTY > 0 || s.SessionCountSSH > 0 {
            t := s.CreatedAt.Truncate(time.Minute)
            combinedStats[key][t] = 60
        }
    }

    // Use temporary maps for aggregation purposes
    mUserIDTemplateIDs := map[uuid.UUID]map[uuid.UUID]struct{}{}
    mUserIDUsageSeconds := map[uuid.UUID]int64{}

    for key, times := range combinedStats {
        if mUserIDTemplateIDs[key.UserID] == nil {
            mUserIDTemplateIDs[key.UserID] = make(map[uuid.UUID]struct{})
            mUserIDUsageSeconds[key.UserID] = 0
        }

        if _, ok := mUserIDTemplateIDs[key.UserID][key.TemplateID]; !ok {
            mUserIDTemplateIDs[key.UserID][key.TemplateID] = struct{}{}
        }

        for _, t := range times {
            mUserIDUsageSeconds[key.UserID] += t
        }
    }

    userIDs := make([]uuid.UUID, 0, len(mUserIDUsageSeconds))
    for userID := range mUserIDUsageSeconds {
        userIDs = append(userIDs, userID)
    }
    sort.Slice(userIDs, func(i, j int) bool {
        return userIDs[i].String() < userIDs[j].String()
    })

    // Finally, select stats
    var rows []database.GetUserActivityInsightsRow

    for _, userID := range userIDs {
        user, err := q.getUserByIDNoLock(userID)
        if err != nil {
            return nil, err
        }

        tids := mUserIDTemplateIDs[userID]
        templateIDs := make([]uuid.UUID, 0, len(tids))
        for key := range tids {
            templateIDs = append(templateIDs, key)
        }
        sort.Slice(templateIDs, func(i, j int) bool {
            return templateIDs[i].String() < templateIDs[j].String()
        })

        row := database.GetUserActivityInsightsRow{
            UserID:       user.ID,
            Username:     user.Username,
            AvatarURL:    user.AvatarURL,
            TemplateIDs:  templateIDs,
            UsageSeconds: mUserIDUsageSeconds[userID],
        }

        rows = append(rows, row)
    }
    return rows, nil
}

func (q *FakeQuerier) GetUserByEmailOrUsername(_ context.Context, arg database.GetUserByEmailOrUsernameParams) (database.User, error) {
    if err := validateDatabaseType(arg); err != nil {
        return database.User{}, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    for _, user := range q.users {
        if !user.Deleted && (strings.EqualFold(user.Email, arg.Email) || strings.EqualFold(user.Username, arg.Username)) {
            return user, nil
        }
    }
    return database.User{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetUserByID(_ context.Context, id uuid.UUID) (database.User, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    return q.getUserByIDNoLock(id)
}

func (q *FakeQuerier) GetUserCount(_ context.Context) (int64, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    existing := int64(0)
    for _, u := range q.users {
        if !u.Deleted {
            existing++
        }
    }
    return existing, nil
}
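
// GetUserLatencyInsights reports per-user connection latency percentiles over
// the requested window. Latencies are collected from workspace agent stats,
// skipping rows with no connections or a non-positive median latency, and the
// 50th/95th percentiles are taken with a simple nearest-rank lookup on the
// sorted samples.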
func (q *FakeQuerier) GetUserLatencyInsights(_ context.Context, arg database.GetUserLatencyInsightsParams) ([]database.GetUserLatencyInsightsRow, error) {
    err := validateDatabaseType(arg)
    if err != nil {
        return nil, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    latenciesByUserID := make(map[uuid.UUID][]float64)
    seenTemplatesByUserID := make(map[uuid.UUID]map[uuid.UUID]struct{})
    for _, s := range q.workspaceAgentStats {
        if len(arg.TemplateIDs) > 0 && !slices.Contains(arg.TemplateIDs, s.TemplateID) {
            continue
        }
        if !arg.StartTime.Equal(s.CreatedAt) && (s.CreatedAt.Before(arg.StartTime) || s.CreatedAt.After(arg.EndTime)) {
            continue
        }
        if s.ConnectionCount == 0 {
            continue
        }
        if s.ConnectionMedianLatencyMS <= 0 {
            continue
        }

        latenciesByUserID[s.UserID] = append(latenciesByUserID[s.UserID], s.ConnectionMedianLatencyMS)
        if seenTemplatesByUserID[s.UserID] == nil {
            seenTemplatesByUserID[s.UserID] = make(map[uuid.UUID]struct{})
        }
        seenTemplatesByUserID[s.UserID][s.TemplateID] = struct{}{}
    }

    tryPercentile := func(fs []float64, p float64) float64 {
        if len(fs) == 0 {
            return -1
        }
        sort.Float64s(fs)
        return fs[int(float64(len(fs))*p/100)]
    }

    var rows []database.GetUserLatencyInsightsRow
    for userID, latencies := range latenciesByUserID {
        sort.Float64s(latencies)
        templateIDSet := seenTemplatesByUserID[userID]
        templateIDs := make([]uuid.UUID, 0, len(templateIDSet))
        for templateID := range templateIDSet {
            templateIDs = append(templateIDs, templateID)
        }
        slices.SortFunc(templateIDs, func(a, b uuid.UUID) int {
            return slice.Ascending(a.String(), b.String())
        })
        user, err := q.getUserByIDNoLock(userID)
        if err != nil {
            return nil, err
        }
        row := database.GetUserLatencyInsightsRow{
            UserID:                       userID,
            Username:                     user.Username,
            AvatarURL:                    user.AvatarURL,
            TemplateIDs:                  templateIDs,
            WorkspaceConnectionLatency50: tryPercentile(latencies, 50),
            WorkspaceConnectionLatency95: tryPercentile(latencies, 95),
        }
        rows = append(rows, row)
    }
    slices.SortFunc(rows, func(a, b database.GetUserLatencyInsightsRow) int {
        return slice.Ascending(a.UserID.String(), b.UserID.String())
    })

    return rows, nil
}

func (q *FakeQuerier) GetUserLinkByLinkedID(_ context.Context, id string) (database.UserLink, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    for _, link := range q.userLinks {
        if link.LinkedID == id {
            return link, nil
        }
    }
    return database.UserLink{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetUserLinkByUserIDLoginType(_ context.Context, params database.GetUserLinkByUserIDLoginTypeParams) (database.UserLink, error) {
    if err := validateDatabaseType(params); err != nil {
        return database.UserLink{}, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    for _, link := range q.userLinks {
        if link.UserID == params.UserID && link.LoginType == params.LoginType {
            return link, nil
        }
    }
    return database.UserLink{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetUserLinksByUserID(_ context.Context, userID uuid.UUID) ([]database.UserLink, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()
    uls := make([]database.UserLink, 0)
    for _, ul := range q.userLinks {
        if ul.UserID == userID {
            uls = append(uls, ul)
        }
    }
    return uls, nil
}
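
// GetUserWorkspaceBuildParameters collects the non-ephemeral build parameter
// values used by one owner's workspaces of a template. It walks the same
// relationships the SQL join would: workspaces owned by the user, their
// builds, the template's versions, and the version parameters, then keeps a
// single row per parameter name (the last value encountered for that name).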
func (q *FakeQuerier) GetUserWorkspaceBuildParameters(_ context.Context, params database.GetUserWorkspaceBuildParametersParams) ([]database.GetUserWorkspaceBuildParametersRow, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    userWorkspaceIDs := make(map[uuid.UUID]struct{})
    for _, ws := range q.workspaces {
        if ws.OwnerID != params.OwnerID {
            continue
        }
        if ws.TemplateID != params.TemplateID {
            continue
        }
        userWorkspaceIDs[ws.ID] = struct{}{}
    }

    userWorkspaceBuilds := make(map[uuid.UUID]struct{})
    for _, wb := range q.workspaceBuilds {
        if _, ok := userWorkspaceIDs[wb.WorkspaceID]; !ok {
            continue
        }
        userWorkspaceBuilds[wb.ID] = struct{}{}
    }

    templateVersions := make(map[uuid.UUID]struct{})
    for _, tv := range q.templateVersions {
        if tv.TemplateID.UUID != params.TemplateID {
            continue
        }
        templateVersions[tv.ID] = struct{}{}
    }

    tvps := make(map[string]struct{})
    for _, tvp := range q.templateVersionParameters {
        if _, ok := templateVersions[tvp.TemplateVersionID]; !ok {
            continue
        }

        if _, ok := tvps[tvp.Name]; !ok && !tvp.Ephemeral {
            tvps[tvp.Name] = struct{}{}
        }
    }

    userWorkspaceBuildParameters := make(map[string]database.GetUserWorkspaceBuildParametersRow)
    for _, wbp := range q.workspaceBuildParameters {
        if _, ok := userWorkspaceBuilds[wbp.WorkspaceBuildID]; !ok {
            continue
        }
        if _, ok := tvps[wbp.Name]; !ok {
            continue
        }
        userWorkspaceBuildParameters[wbp.Name] = database.GetUserWorkspaceBuildParametersRow{
            Name:  wbp.Name,
            Value: wbp.Value,
        }
    }

    return maps.Values(userWorkspaceBuildParameters), nil
}
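
// GetUsers lists users with the same filter pipeline as the SQL query: sort by
// username, drop deleted users, apply the AfterID cursor, then the search,
// status, role, and last-seen filters, and finally offset/limit pagination.
// The returned rows carry the pre-pagination count. For illustration only, a
// hypothetical caller paging through matching users might look like:
//
//	rows, err := q.GetUsers(ctx, database.GetUsersParams{
//		Search:    "alice",
//		OffsetOpt: 0,
//		LimitOpt:  50,
//	})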
func (q *FakeQuerier) GetUsers(_ context.Context, params database.GetUsersParams) ([]database.GetUsersRow, error) {
    if err := validateDatabaseType(params); err != nil {
        return nil, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    // Avoid side-effect of sorting.
    users := make([]database.User, len(q.users))
    copy(users, q.users)

    // Database orders by username
    slices.SortFunc(users, func(a, b database.User) int {
        return slice.Ascending(strings.ToLower(a.Username), strings.ToLower(b.Username))
    })

    // Filter out deleted users since they should never be returned.
    tmp := make([]database.User, 0, len(users))
    for _, user := range users {
        if !user.Deleted {
            tmp = append(tmp, user)
        }
    }
    users = tmp

    if params.AfterID != uuid.Nil {
        found := false
        for i, v := range users {
            if v.ID == params.AfterID {
                // We want to return all users after index i.
                users = users[i+1:]
                found = true
                break
            }
        }

        // If no users come after the cursor ID, return an empty list.
        if !found {
            return []database.GetUsersRow{}, nil
        }
    }

    if params.Search != "" {
        tmp := make([]database.User, 0, len(users))
        for i, user := range users {
            if strings.Contains(strings.ToLower(user.Email), strings.ToLower(params.Search)) {
                tmp = append(tmp, users[i])
            } else if strings.Contains(strings.ToLower(user.Username), strings.ToLower(params.Search)) {
                tmp = append(tmp, users[i])
            }
        }
        users = tmp
    }

    if len(params.Status) > 0 {
        usersFilteredByStatus := make([]database.User, 0, len(users))
        for i, user := range users {
            if slice.ContainsCompare(params.Status, user.Status, func(a, b database.UserStatus) bool {
                return strings.EqualFold(string(a), string(b))
            }) {
                usersFilteredByStatus = append(usersFilteredByStatus, users[i])
            }
        }
        users = usersFilteredByStatus
    }

    if len(params.RbacRole) > 0 && !slice.Contains(params.RbacRole, rbac.RoleMember()) {
        usersFilteredByRole := make([]database.User, 0, len(users))
        for i, user := range users {
            if slice.OverlapCompare(params.RbacRole, user.RBACRoles, strings.EqualFold) {
                usersFilteredByRole = append(usersFilteredByRole, users[i])
            }
        }
        users = usersFilteredByRole
    }

    if !params.LastSeenBefore.IsZero() {
        usersFilteredByLastSeen := make([]database.User, 0, len(users))
        for i, user := range users {
            if user.LastSeenAt.Before(params.LastSeenBefore) {
                usersFilteredByLastSeen = append(usersFilteredByLastSeen, users[i])
            }
        }
        users = usersFilteredByLastSeen
    }

    if !params.LastSeenAfter.IsZero() {
        usersFilteredByLastSeen := make([]database.User, 0, len(users))
        for i, user := range users {
            if user.LastSeenAt.After(params.LastSeenAfter) {
                usersFilteredByLastSeen = append(usersFilteredByLastSeen, users[i])
            }
        }
        users = usersFilteredByLastSeen
    }

    beforePageCount := len(users)

    if params.OffsetOpt > 0 {
        if int(params.OffsetOpt) > len(users)-1 {
            return []database.GetUsersRow{}, nil
        }
        users = users[params.OffsetOpt:]
    }

    if params.LimitOpt > 0 {
        if int(params.LimitOpt) > len(users) {
            params.LimitOpt = int32(len(users))
        }
        users = users[:params.LimitOpt]
    }

    return convertUsers(users, int64(beforePageCount)), nil
}

func (q *FakeQuerier) GetUsersByIDs(_ context.Context, ids []uuid.UUID) ([]database.User, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    users := make([]database.User, 0)
    for _, user := range q.users {
        for _, id := range ids {
            if user.ID != id {
                continue
            }
            users = append(users, user)
        }
    }
    return users, nil
}
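
// GetWorkspaceAgentAndOwnerByAuthToken finds the agent matching the given auth
// token and, walking agent -> resource -> build -> workspace, assembles the
// owning user's roles and group memberships. When the token matches agents
// across several builds of a workspace, the row from the highest build number
// wins, mirroring the "latest build" semantics of the real query.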
func (q *FakeQuerier) GetWorkspaceAgentAndOwnerByAuthToken(_ context.Context, authToken uuid.UUID) (database.GetWorkspaceAgentAndOwnerByAuthTokenRow, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    // map of build number -> row
    rows := make(map[int32]database.GetWorkspaceAgentAndOwnerByAuthTokenRow)

    // We want to return the latest build number
    var latestBuildNumber int32

    for _, agt := range q.workspaceAgents {
        if agt.AuthToken != authToken {
            continue
        }
        // get the related workspace and user
        for _, res := range q.workspaceResources {
            if agt.ResourceID != res.ID {
                continue
            }
            for _, build := range q.workspaceBuilds {
                if build.JobID != res.JobID {
                    continue
                }
                for _, ws := range q.workspaces {
                    if build.WorkspaceID != ws.ID {
                        continue
                    }
                    if ws.Deleted {
                        continue
                    }
                    var row database.GetWorkspaceAgentAndOwnerByAuthTokenRow
                    row.WorkspaceID = ws.ID
                    row.TemplateID = ws.TemplateID
                    usr, err := q.getUserByIDNoLock(ws.OwnerID)
                    if err != nil {
                        return database.GetWorkspaceAgentAndOwnerByAuthTokenRow{}, sql.ErrNoRows
                    }
                    row.OwnerID = usr.ID
                    row.OwnerRoles = append(usr.RBACRoles, "member")
                    // We also need to get org roles for the user
                    row.OwnerName = usr.Username
                    row.WorkspaceAgent = agt
                    row.TemplateVersionID = build.TemplateVersionID
                    for _, mem := range q.organizationMembers {
                        if mem.UserID == usr.ID {
                            row.OwnerRoles = append(row.OwnerRoles, fmt.Sprintf("organization-member:%s", mem.OrganizationID.String()))
                        }
                    }
                    // And group memberships
                    for _, groupMem := range q.groupMembers {
                        if groupMem.UserID == usr.ID {
                            row.OwnerGroups = append(row.OwnerGroups, groupMem.GroupID.String())
                        }
                    }

                    // Keep track of the latest build number
                    rows[build.BuildNumber] = row
                    if build.BuildNumber > latestBuildNumber {
                        latestBuildNumber = build.BuildNumber
                    }
                }
            }
        }
    }

    if len(rows) == 0 {
        return database.GetWorkspaceAgentAndOwnerByAuthTokenRow{}, sql.ErrNoRows
    }

    // Return the row related to the latest build
    return rows[latestBuildNumber], nil
}

func (q *FakeQuerier) GetWorkspaceAgentByID(ctx context.Context, id uuid.UUID) (database.WorkspaceAgent, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    return q.getWorkspaceAgentByIDNoLock(ctx, id)
}

func (q *FakeQuerier) GetWorkspaceAgentByInstanceID(_ context.Context, instanceID string) (database.WorkspaceAgent, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    // The schema sorts this by created at, so we iterate the array backwards.
    for i := len(q.workspaceAgents) - 1; i >= 0; i-- {
        agent := q.workspaceAgents[i]
        if agent.AuthInstanceID.Valid && agent.AuthInstanceID.String == instanceID {
            return agent, nil
        }
    }
    return database.WorkspaceAgent{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceAgentLifecycleStateByID(ctx context.Context, id uuid.UUID) (database.GetWorkspaceAgentLifecycleStateByIDRow, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    agent, err := q.getWorkspaceAgentByIDNoLock(ctx, id)
    if err != nil {
        return database.GetWorkspaceAgentLifecycleStateByIDRow{}, err
    }
    return database.GetWorkspaceAgentLifecycleStateByIDRow{
        LifecycleState: agent.LifecycleState,
        StartedAt:      agent.StartedAt,
        ReadyAt:        agent.ReadyAt,
    }, nil
}

func (q *FakeQuerier) GetWorkspaceAgentLogSourcesByAgentIDs(_ context.Context, ids []uuid.UUID) ([]database.WorkspaceAgentLogSource, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    logSources := make([]database.WorkspaceAgentLogSource, 0)
    for _, logSource := range q.workspaceAgentLogSources {
        for _, id := range ids {
            if logSource.WorkspaceAgentID == id {
                logSources = append(logSources, logSource)
                break
            }
        }
    }
    return logSources, nil
}
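
// GetWorkspaceAgentLogsAfter returns an agent's logs after a cursor. Note that
// in this fake the CreatedAfter argument is compared against the log ID, a
// monotonically increasing cursor, rather than a timestamp, so the value is
// effectively the ID of the last log already seen.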
func (q *FakeQuerier) GetWorkspaceAgentLogsAfter(_ context.Context, arg database.GetWorkspaceAgentLogsAfterParams) ([]database.WorkspaceAgentLog, error) {
    if err := validateDatabaseType(arg); err != nil {
        return nil, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    logs := []database.WorkspaceAgentLog{}
    for _, log := range q.workspaceAgentLogs {
        if log.AgentID != arg.AgentID {
            continue
        }
        if arg.CreatedAfter != 0 && log.ID <= arg.CreatedAfter {
            continue
        }
        logs = append(logs, log)
    }
    return logs, nil
}

func (q *FakeQuerier) GetWorkspaceAgentMetadata(_ context.Context, arg database.GetWorkspaceAgentMetadataParams) ([]database.WorkspaceAgentMetadatum, error) {
    if err := validateDatabaseType(arg); err != nil {
        return nil, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    metadata := make([]database.WorkspaceAgentMetadatum, 0)
    for _, m := range q.workspaceAgentMetadata {
        if m.WorkspaceAgentID == arg.WorkspaceAgentID {
            if len(arg.Keys) > 0 && !slices.Contains(arg.Keys, m.Key) {
                continue
            }
            metadata = append(metadata, m)
        }
    }
    return metadata, nil
}

func (q *FakeQuerier) GetWorkspaceAgentPortShare(_ context.Context, arg database.GetWorkspaceAgentPortShareParams) (database.WorkspaceAgentPortShare, error) {
    err := validateDatabaseType(arg)
    if err != nil {
        return database.WorkspaceAgentPortShare{}, err
    }

    q.mutex.RLock()
    defer q.mutex.RUnlock()

    for _, share := range q.workspaceAgentPortShares {
        if share.WorkspaceID == arg.WorkspaceID && share.AgentName == arg.AgentName && share.Port == arg.Port {
            return share, nil
        }
    }

    return database.WorkspaceAgentPortShare{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceAgentScriptsByAgentIDs(_ context.Context, ids []uuid.UUID) ([]database.WorkspaceAgentScript, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    scripts := make([]database.WorkspaceAgentScript, 0)
    for _, script := range q.workspaceAgentScripts {
        for _, id := range ids {
            if script.WorkspaceAgentID == id {
                scripts = append(scripts, script)
                break
            }
        }
    }
    return scripts, nil
}
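
// GetWorkspaceAgentStats rolls up agent stats created at or after the given
// time: session counts come from the latest stat row per agent, rx/tx bytes
// are summed across every row in the window, and connection latencies are
// accumulated per agent for percentile calculation by the tryPercentile
// helper below.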
func (q *FakeQuerier) GetWorkspaceAgentStats(_ context.Context, createdAfter time.Time) ([]database.GetWorkspaceAgentStatsRow, error) {
    q.mutex.RLock()
    defer q.mutex.RUnlock()

    agentStatsCreatedAfter := make([]database.WorkspaceAgentStat, 0)
    for _, agentStat := range q.workspaceAgentStats {
        if agentStat.CreatedAt.After(createdAfter) || agentStat.CreatedAt.Equal(createdAfter) {
            agentStatsCreatedAfter = append(agentStatsCreatedAfter, agentStat)
        }
    }

    latestAgentStats := map[uuid.UUID]database.WorkspaceAgentStat{}
    for _, agentStat := range q.workspaceAgentStats {
        if agentStat.CreatedAt.After(createdAfter) || agentStat.CreatedAt.Equal(createdAfter) {
            latestAgentStats[agentStat.AgentID] = agentStat
        }
    }

    statByAgent := map[uuid.UUID]database.GetWorkspaceAgentStatsRow{}
    for agentID, agentStat := range latestAgentStats {
        stat := statByAgent[agentID]
        stat.AgentID = agentStat.AgentID
        stat.TemplateID = agentStat.TemplateID
        stat.UserID = agentStat.UserID
        stat.WorkspaceID = agentStat.WorkspaceID
        stat.SessionCountVSCode += agentStat.SessionCountVSCode
        stat.SessionCountJetBrains += agentStat.SessionCountJetBrains
        stat.SessionCountReconnectingPTY += agentStat.SessionCountReconnectingPTY
        stat.SessionCountSSH += agentStat.SessionCountSSH
        statByAgent[stat.AgentID] = stat
    }

    latenciesByAgent := map[uuid.UUID][]float64{}
    minimumDateByAgent := map[uuid.UUID]time.Time{}
    for _, agentStat := range agentStatsCreatedAfter {
        if agentStat.ConnectionMedianLatencyMS <= 0 {
            continue
        }
        stat := statByAgent[agentStat.AgentID]
        minimumDate := minimumDateByAgent[agentStat.AgentID]
        if agentStat.CreatedAt.Before(minimumDate) || minimumDate.IsZero() {
            minimumDateByAgent[agentStat.AgentID] = agentStat.CreatedAt
        }
        stat.WorkspaceRxBytes += agentStat.RxBytes
        stat.WorkspaceTxBytes += agentStat.TxBytes
        statByAgent[agentStat.AgentID] = stat
        latenciesByAgent[agentStat.AgentID] = append(latenciesByAgent[agentStat.AgentID], agentStat.ConnectionMedianLatencyMS)
    }

    tryPercentile := func(fs []float64, p float64) float64 {
        if len(fs) == 0 {
            return -1
        }
        sort.Float64s(fs)
        return fs[int(float64(len(fs))*p/100)]
    }
__Fix:__ Instead of recreating the channel everytime, we can use `sync.WaitGroup` to accomplish the same functionality - a semaphore to make close wait for the current job to wrap up.
* fix: Bump up workspace history timeout (#149)
This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32
Looking at the timing of the test:
```
t.go:56: 2022-02-02 21:33:21.964 [DEBUG] (terraform-provisioner) <provision.go:139> ran apply
t.go:56: 2022-02-02 21:33:21.991 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.050 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.090 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.140 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.195 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.240 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
It appears that the `terraform apply` job had just finished - with less than a second to spare until our `require.Eventually` completes - but there's still work to be done (ie, collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout.
Note that in the setup for this test - there is a similar project history wait that waits for 15s, so I borrowed that here.
In the future - we can look at potentially using a simple echo provider to exercise this in the unit test, in a way that is more reliable in terms of timing. I'll log an issue to track that.
Co-authored-by: Bryan <bryan@coder.com>
2022-02-03 20:34:50 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
	for _, stat := range statByAgent {
		stat.AggregatedFrom = minimumDateByAgent[stat.AgentID]
		statByAgent[stat.AgentID] = stat

		latencies, ok := latenciesByAgent[stat.AgentID]
		if !ok {
			continue
		}
		stat.WorkspaceConnectionLatency50 = tryPercentile(latencies, 50)
		stat.WorkspaceConnectionLatency95 = tryPercentile(latencies, 95)
		statByAgent[stat.AgentID] = stat
	}

	stats := make([]database.GetWorkspaceAgentStatsRow, 0, len(statByAgent))
	for _, agent := range statByAgent {
		stats = append(stats, agent)
	}
	return stats, nil
}
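
// GetWorkspaceAgentStatsAndLabels aggregates the in-memory agent stats
// recorded after createdAfter and decorates each row with the owning user,
// workspace, and agent names so they can be used as labels.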
func (q *FakeQuerier) GetWorkspaceAgentStatsAndLabels(ctx context.Context, createdAfter time.Time) ([]database.GetWorkspaceAgentStatsAndLabelsRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	agentStatsCreatedAfter := make([]database.WorkspaceAgentStat, 0)
	latestAgentStats := map[uuid.UUID]database.WorkspaceAgentStat{}

	for _, agentStat := range q.workspaceAgentStats {
		if agentStat.CreatedAt.After(createdAfter) {
			agentStatsCreatedAfter = append(agentStatsCreatedAfter, agentStat)
			latestAgentStats[agentStat.AgentID] = agentStat
		}
	}

	statByAgent := map[uuid.UUID]database.GetWorkspaceAgentStatsAndLabelsRow{}
	// Session and connection metrics
	for _, agentStat := range latestAgentStats {
		stat := statByAgent[agentStat.AgentID]
		stat.SessionCountVSCode += agentStat.SessionCountVSCode
		stat.SessionCountJetBrains += agentStat.SessionCountJetBrains
		stat.SessionCountReconnectingPTY += agentStat.SessionCountReconnectingPTY
		stat.SessionCountSSH += agentStat.SessionCountSSH
		stat.ConnectionCount += agentStat.ConnectionCount
		if agentStat.ConnectionMedianLatencyMS >= 0 && stat.ConnectionMedianLatencyMS < agentStat.ConnectionMedianLatencyMS {
			stat.ConnectionMedianLatencyMS = agentStat.ConnectionMedianLatencyMS
		}
		statByAgent[agentStat.AgentID] = stat
	}

	// Tx, Rx metrics
	for _, agentStat := range agentStatsCreatedAfter {
		stat := statByAgent[agentStat.AgentID]
		stat.RxBytes += agentStat.RxBytes
		stat.TxBytes += agentStat.TxBytes
		statByAgent[agentStat.AgentID] = stat
	}

	// Labels
	for _, agentStat := range agentStatsCreatedAfter {
		stat := statByAgent[agentStat.AgentID]
		user, err := q.getUserByIDNoLock(agentStat.UserID)
		if err != nil {
			return nil, err
		}

		stat.Username = user.Username

		workspace, err := q.getWorkspaceByIDNoLock(ctx, agentStat.WorkspaceID)
		if err != nil {
			return nil, err
		}
		stat.WorkspaceName = workspace.Name

		agent, err := q.getWorkspaceAgentByIDNoLock(ctx, agentStat.AgentID)
		if err != nil {
			return nil, err
		}
		stat.AgentName = agent.Name

		statByAgent[agentStat.AgentID] = stat
	}

	stats := make([]database.GetWorkspaceAgentStatsAndLabelsRow, 0, len(statByAgent))
	for _, agent := range statByAgent {
		stats = append(stats, agent)
	}
	return stats, nil
}

func (q *FakeQuerier) GetWorkspaceAgentsByResourceIDs(ctx context.Context, resourceIDs []uuid.UUID) ([]database.WorkspaceAgent, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getWorkspaceAgentsByResourceIDsNoLock(ctx, resourceIDs)
}

func (q *FakeQuerier) GetWorkspaceAgentsCreatedAfter(_ context.Context, after time.Time) ([]database.WorkspaceAgent, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	workspaceAgents := make([]database.WorkspaceAgent, 0)
	for _, agent := range q.workspaceAgents {
		if agent.CreatedAt.After(after) {
			workspaceAgents = append(workspaceAgents, agent)
		}
	}
	return workspaceAgents, nil
}
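
// GetWorkspaceAgentsInLatestBuildByWorkspaceID resolves the latest build for
// the workspace, collects the resources created by that build's job, and
// returns the agents attached to those resources.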
func (q *FakeQuerier) GetWorkspaceAgentsInLatestBuildByWorkspaceID(ctx context.Context, workspaceID uuid.UUID) ([]database.WorkspaceAgent, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	// Get latest build for workspace.
	workspaceBuild, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspaceID)
	if err != nil {
		return nil, xerrors.Errorf("get latest workspace build: %w", err)
	}

	// Get resources for build.
	resources, err := q.getWorkspaceResourcesByJobIDNoLock(ctx, workspaceBuild.JobID)
	if err != nil {
		return nil, xerrors.Errorf("get workspace resources: %w", err)
	}
	if len(resources) == 0 {
		return []database.WorkspaceAgent{}, nil
	}

	resourceIDs := make([]uuid.UUID, len(resources))
	for i, resource := range resources {
		resourceIDs[i] = resource.ID
	}

	agents, err := q.getWorkspaceAgentsByResourceIDsNoLock(ctx, resourceIDs)
	if err != nil {
		return nil, xerrors.Errorf("get workspace agents: %w", err)
	}

	return agents, nil
}

func (q *FakeQuerier) GetWorkspaceAppByAgentIDAndSlug(ctx context.Context, arg database.GetWorkspaceAppByAgentIDAndSlugParams) (database.WorkspaceApp, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.WorkspaceApp{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getWorkspaceAppByAgentIDAndSlugNoLock(ctx, arg)
}

func (q *FakeQuerier) GetWorkspaceAppsByAgentID(_ context.Context, id uuid.UUID) ([]database.WorkspaceApp, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	apps := make([]database.WorkspaceApp, 0)
	for _, app := range q.workspaceApps {
		if app.AgentID == id {
			apps = append(apps, app)
		}
	}
	return apps, nil
}
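
// GetWorkspaceAppsByAgentIDs returns every stored workspace app whose agent
// is in the provided ID list.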
func (q *FakeQuerier) GetWorkspaceAppsByAgentIDs(_ context.Context, ids []uuid.UUID) ([]database.WorkspaceApp, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	apps := make([]database.WorkspaceApp, 0)
	for _, app := range q.workspaceApps {
		for _, id := range ids {
			if app.AgentID == id {
				apps = append(apps, app)
				break
			}
		}
	}
	return apps, nil
}

func (q *FakeQuerier) GetWorkspaceAppsCreatedAfter(_ context.Context, after time.Time) ([]database.WorkspaceApp, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	apps := make([]database.WorkspaceApp, 0)
	for _, app := range q.workspaceApps {
		if app.CreatedAt.After(after) {
			apps = append(apps, app)
		}
	}
	return apps, nil
}

func (q *FakeQuerier) GetWorkspaceBuildByID(ctx context.Context, id uuid.UUID) (database.WorkspaceBuild, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getWorkspaceBuildByIDNoLock(ctx, id)
}

func (q *FakeQuerier) GetWorkspaceBuildByJobID(_ context.Context, jobID uuid.UUID) (database.WorkspaceBuild, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, build := range q.workspaceBuilds {
		if build.JobID == jobID {
			return q.workspaceBuildWithUserNoLock(build), nil
		}
	}
	return database.WorkspaceBuild{}, sql.ErrNoRows
}
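
// GetWorkspaceBuildByWorkspaceIDAndBuildNumber returns the build matching
// both the workspace ID and the build number, or sql.ErrNoRows if none
// exists.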
func (q *FakeQuerier) GetWorkspaceBuildByWorkspaceIDAndBuildNumber(_ context.Context, arg database.GetWorkspaceBuildByWorkspaceIDAndBuildNumberParams) (database.WorkspaceBuild, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.WorkspaceBuild{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, workspaceBuild := range q.workspaceBuilds {
		if workspaceBuild.WorkspaceID != arg.WorkspaceID {
			continue
		}
		if workspaceBuild.BuildNumber != arg.BuildNumber {
			continue
		}
		return q.workspaceBuildWithUserNoLock(workspaceBuild), nil
	}
	return database.WorkspaceBuild{}, sql.ErrNoRows
}
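
// GetWorkspaceBuildParameters returns the parameters recorded for a single
// workspace build.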
func (q *FakeQuerier) GetWorkspaceBuildParameters(_ context.Context, workspaceBuildID uuid.UUID) ([]database.WorkspaceBuildParameter, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	params := make([]database.WorkspaceBuildParameter, 0)
	for _, param := range q.workspaceBuildParameters {
		if param.WorkspaceBuildID != workspaceBuildID {
			continue
		}
		params = append(params, param)
	}
	return params, nil
}
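
// GetWorkspaceBuildsByWorkspaceID filters builds by workspace and Since,
// sorts them by descending build number, then applies the AfterID cursor,
// OffsetOpt, and LimitOpt before returning. It returns sql.ErrNoRows when
// the resulting page is empty.
//
// A minimal usage sketch (assuming a populated querier q and a workspace ID
// wsID; the variable names are illustrative only):
//
//	builds, err := q.GetWorkspaceBuildsByWorkspaceID(ctx, database.GetWorkspaceBuildsByWorkspaceIDParams{
//		WorkspaceID: wsID,
//		LimitOpt:    10,
//	})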
func (q *FakeQuerier) GetWorkspaceBuildsByWorkspaceID(_ context.Context,
	params database.GetWorkspaceBuildsByWorkspaceIDParams,
) ([]database.WorkspaceBuild, error) {
	if err := validateDatabaseType(params); err != nil {
		return nil, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	history := make([]database.WorkspaceBuild, 0)
	for _, workspaceBuild := range q.workspaceBuilds {
		if workspaceBuild.CreatedAt.Before(params.Since) {
			continue
		}
		if workspaceBuild.WorkspaceID == params.WorkspaceID {
			history = append(history, q.workspaceBuildWithUserNoLock(workspaceBuild))
		}
	}

	// Order by build_number
	slices.SortFunc(history, func(a, b database.WorkspaceBuild) int {
		return slice.Descending(a.BuildNumber, b.BuildNumber)
	})

	if params.AfterID != uuid.Nil {
		found := false
		for i, v := range history {
			if v.ID == params.AfterID {
				// We want to return all builds after index i.
				history = history[i+1:]
				found = true
				break
			}
		}

		// If no builds after the time, then we return an empty list.
		if !found {
			return nil, sql.ErrNoRows
		}
	}

	if params.OffsetOpt > 0 {
		if int(params.OffsetOpt) > len(history)-1 {
			return nil, sql.ErrNoRows
		}
		history = history[params.OffsetOpt:]
	}

	if params.LimitOpt > 0 {
		if int(params.LimitOpt) > len(history) {
			params.LimitOpt = int32(len(history))
		}
		history = history[:params.LimitOpt]
	}

	if len(history) == 0 {
		return nil, sql.ErrNoRows
	}
	return history, nil
}

func (q *FakeQuerier) GetWorkspaceBuildsCreatedAfter(_ context.Context, after time.Time) ([]database.WorkspaceBuild, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	workspaceBuilds := make([]database.WorkspaceBuild, 0)
	for _, workspaceBuild := range q.workspaceBuilds {
		if workspaceBuild.CreatedAt.After(after) {
			workspaceBuilds = append(workspaceBuilds, q.workspaceBuildWithUserNoLock(workspaceBuild))
		}
	}
	return workspaceBuilds, nil
}
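
// GetWorkspaceByAgentID looks up the workspace that owns the given agent and
// includes the template name on the returned row.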
func (q *FakeQuerier) GetWorkspaceByAgentID(ctx context.Context, agentID uuid.UUID) (database.GetWorkspaceByAgentIDRow, error) {
|
2023-07-13 17:12:29 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-12-13 17:45:43 +00:00
|
|
|
w, err := q.getWorkspaceByAgentIDNoLock(ctx, agentID)
|
|
|
|
if err != nil {
|
|
|
|
return database.GetWorkspaceByAgentIDRow{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
tpl, err := q.getTemplateByIDNoLock(ctx, w.TemplateID)
|
|
|
|
if err != nil {
|
|
|
|
return database.GetWorkspaceByAgentIDRow{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
return database.GetWorkspaceByAgentIDRow{
|
|
|
|
Workspace: w,
|
|
|
|
TemplateName: tpl.Name,
|
|
|
|
}, nil
|
2023-07-13 17:12:29 +00:00
|
|
|
}
func (q *FakeQuerier) GetWorkspaceByID(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getWorkspaceByIDNoLock(ctx, id)
}

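// GetWorkspaceByOwnerIDAndName matches on owner ID, case-insensitive name,
// and the Deleted flag, returning the most recently created workspace when
// more than one matches.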
func (q *FakeQuerier) GetWorkspaceByOwnerIDAndName(_ context.Context, arg database.GetWorkspaceByOwnerIDAndNameParams) (database.Workspace, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Workspace{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	var found *database.Workspace
	for _, workspace := range q.workspaces {
		workspace := workspace
		if workspace.OwnerID != arg.OwnerID {
			continue
		}
		if !strings.EqualFold(workspace.Name, arg.Name) {
			continue
		}
		if workspace.Deleted != arg.Deleted {
			continue
		}

		// Return the most recent workspace with the given name
		if found == nil || workspace.CreatedAt.After(found.CreatedAt) {
			found = &workspace
		}
	}
	if found != nil {
		return *found, nil
	}
	return database.Workspace{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceByWorkspaceAppID(_ context.Context, workspaceAppID uuid.UUID) (database.Workspace, error) {
	if err := validateDatabaseType(workspaceAppID); err != nil {
		return database.Workspace{}, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, workspaceApp := range q.workspaceApps {
		workspaceApp := workspaceApp
		if workspaceApp.ID == workspaceAppID {
			return q.getWorkspaceByAgentIDNoLock(context.Background(), workspaceApp.AgentID)
		}
	}
	return database.Workspace{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceProxies(_ context.Context) ([]database.WorkspaceProxy, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	cpy := make([]database.WorkspaceProxy, 0, len(q.workspaceProxies))

	for _, p := range q.workspaceProxies {
		if !p.Deleted {
			cpy = append(cpy, p)
		}
	}
	return cpy, nil
}

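// GetWorkspaceProxyByHostname resolves a proxy either by its access URL host
// or by its wildcard app hostname, depending on which params are set. The
// regular expressions used here intentionally mirror the SQL implementation.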
func (q *FakeQuerier) GetWorkspaceProxyByHostname(_ context.Context, params database.GetWorkspaceProxyByHostnameParams) (database.WorkspaceProxy, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	// Return zero rows if this is called with a non-sanitized hostname. The SQL
	// version of this query does the same thing.
	if !validProxyByHostnameRegex.MatchString(params.Hostname) {
		return database.WorkspaceProxy{}, sql.ErrNoRows
	}

	// This regex matches the SQL version.
	accessURLRegex := regexp.MustCompile(`[^:]*://` + regexp.QuoteMeta(params.Hostname) + `([:/]?.)*`)

	for _, proxy := range q.workspaceProxies {
		if proxy.Deleted {
			continue
		}
		if params.AllowAccessUrl && accessURLRegex.MatchString(proxy.Url) {
			return proxy, nil
		}

		// Compile the app hostname regex. Sadly, this is slow.
		if params.AllowWildcardHostname {
			wildcardRegexp, err := appurl.CompileHostnamePattern(proxy.WildcardHostname)
			if err != nil {
				return database.WorkspaceProxy{}, xerrors.Errorf("compile hostname pattern %q for proxy %q (%s): %w", proxy.WildcardHostname, proxy.Name, proxy.ID.String(), err)
			}
			if _, ok := appurl.ExecuteHostnamePattern(wildcardRegexp, params.Hostname); ok {
				return proxy, nil
			}
		}
	}

	return database.WorkspaceProxy{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceProxyByID(_ context.Context, id uuid.UUID) (database.WorkspaceProxy, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, proxy := range q.workspaceProxies {
		if proxy.ID == id {
			return proxy, nil
		}
	}
	return database.WorkspaceProxy{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceProxyByName(_ context.Context, name string) (database.WorkspaceProxy, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, proxy := range q.workspaceProxies {
		if proxy.Deleted {
			continue
		}
		if proxy.Name == name {
			return proxy, nil
		}
	}
	return database.WorkspaceProxy{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceResourceByID(_ context.Context, id uuid.UUID) (database.WorkspaceResource, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	for _, resource := range q.workspaceResources {
		if resource.ID == id {
			return resource, nil
		}
	}
	return database.WorkspaceResource{}, sql.ErrNoRows
}

func (q *FakeQuerier) GetWorkspaceResourceMetadataByResourceIDs(_ context.Context, ids []uuid.UUID) ([]database.WorkspaceResourceMetadatum, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	metadata := make([]database.WorkspaceResourceMetadatum, 0)
	for _, metadatum := range q.workspaceResourceMetadata {
		for _, id := range ids {
			if metadatum.WorkspaceResourceID == id {
				metadata = append(metadata, metadatum)
			}
		}
	}
	return metadata, nil
}

func (q *FakeQuerier) GetWorkspaceResourceMetadataCreatedAfter(ctx context.Context, after time.Time) ([]database.WorkspaceResourceMetadatum, error) {
	resources, err := q.GetWorkspaceResourcesCreatedAfter(ctx, after)
	if err != nil {
		return nil, err
	}
	resourceIDs := map[uuid.UUID]struct{}{}
	for _, resource := range resources {
		resourceIDs[resource.ID] = struct{}{}
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	metadata := make([]database.WorkspaceResourceMetadatum, 0)
	for _, m := range q.workspaceResourceMetadata {
		_, ok := resourceIDs[m.WorkspaceResourceID]
		if !ok {
			continue
		}
		metadata = append(metadata, m)
	}
	return metadata, nil
}

func (q *FakeQuerier) GetWorkspaceResourcesByJobID(ctx context.Context, jobID uuid.UUID) ([]database.WorkspaceResource, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	return q.getWorkspaceResourcesByJobIDNoLock(ctx, jobID)
}

func (q *FakeQuerier) GetWorkspaceResourcesByJobIDs(_ context.Context, jobIDs []uuid.UUID) ([]database.WorkspaceResource, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	resources := make([]database.WorkspaceResource, 0)
	for _, resource := range q.workspaceResources {
		for _, jobID := range jobIDs {
			if resource.JobID != jobID {
				continue
			}
			resources = append(resources, resource)
		}
	}
	return resources, nil
}

func (q *FakeQuerier) GetWorkspaceResourcesCreatedAfter(_ context.Context, after time.Time) ([]database.WorkspaceResource, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	resources := make([]database.WorkspaceResource, 0)
	for _, resource := range q.workspaceResources {
		if resource.CreatedAt.After(after) {
			resources = append(resources, resource)
		}
	}
	return resources, nil
}

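// GetWorkspaceUniqueOwnerCountByTemplateIDs counts the distinct owners of
// non-deleted workspaces for each requested template, returning a zero count
// for templates that have no workspaces.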
func (q *FakeQuerier) GetWorkspaceUniqueOwnerCountByTemplateIDs(_ context.Context, templateIds []uuid.UUID) ([]database.GetWorkspaceUniqueOwnerCountByTemplateIDsRow, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	workspaceOwners := make(map[uuid.UUID]map[uuid.UUID]struct{})
	for _, workspace := range q.workspaces {
		if workspace.Deleted {
			continue
		}
		if !slices.Contains(templateIds, workspace.TemplateID) {
			continue
		}
		_, ok := workspaceOwners[workspace.TemplateID]
		if !ok {
			workspaceOwners[workspace.TemplateID] = make(map[uuid.UUID]struct{})
		}
		workspaceOwners[workspace.TemplateID][workspace.OwnerID] = struct{}{}
	}
	resp := make([]database.GetWorkspaceUniqueOwnerCountByTemplateIDsRow, 0)
	for _, templateID := range templateIds {
		count := len(workspaceOwners[templateID])
		resp = append(resp, database.GetWorkspaceUniqueOwnerCountByTemplateIDsRow{
			TemplateID:      templateID,
			UniqueOwnersSum: int64(count),
		})
	}

	return resp, nil
}

func (q *FakeQuerier) GetWorkspaces(ctx context.Context, arg database.GetWorkspacesParams) ([]database.GetWorkspacesRow, error) {
	if err := validateDatabaseType(arg); err != nil {
		return nil, err
	}

	// A nil auth filter means no auth filter.
	workspaceRows, err := q.GetAuthorizedWorkspaces(ctx, arg, nil)
	return workspaceRows, err
}

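// GetWorkspacesEligibleForTransition returns workspaces that may need an
// automatic transition: running builds past their deadline, stopped
// workspaces with an autostart schedule, workspaces whose last provisioner
// job failed, and workspaces affected by the template's dormancy settings.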
func (q *FakeQuerier) GetWorkspacesEligibleForTransition(ctx context.Context, now time.Time) ([]database.Workspace, error) {
	q.mutex.RLock()
	defer q.mutex.RUnlock()

	workspaces := []database.Workspace{}
	for _, workspace := range q.workspaces {
		build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspace.ID)
		if err != nil {
			return nil, err
		}

		if build.Transition == database.WorkspaceTransitionStart &&
			!build.Deadline.IsZero() &&
			build.Deadline.Before(now) &&
			!workspace.DormantAt.Valid {
			workspaces = append(workspaces, workspace)
			continue
		}

		if build.Transition == database.WorkspaceTransitionStop &&
			workspace.AutostartSchedule.Valid &&
			!workspace.DormantAt.Valid {
			workspaces = append(workspaces, workspace)
			continue
		}

		job, err := q.getProvisionerJobByIDNoLock(ctx, build.JobID)
		if err != nil {
			return nil, xerrors.Errorf("get provisioner job by ID: %w", err)
		}
		if codersdk.ProvisionerJobStatus(job.JobStatus) == codersdk.ProvisionerJobFailed {
			workspaces = append(workspaces, workspace)
			continue
		}

		template, err := q.getTemplateByIDNoLock(ctx, workspace.TemplateID)
		if err != nil {
			return nil, xerrors.Errorf("get template by ID: %w", err)
		}
		if !workspace.DormantAt.Valid && template.TimeTilDormant > 0 {
			workspaces = append(workspaces, workspace)
			continue
		}
		if workspace.DormantAt.Valid && template.TimeTilDormantAutoDelete > 0 {
			workspaces = append(workspaces, workspace)
			continue
		}
	}

	return workspaces, nil
}

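// InsertAPIKey stores a new API key, defaulting LifetimeSeconds to 86400
// (one day) when unset and refusing to create keys for deleted users.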
func (q *FakeQuerier) InsertAPIKey(_ context.Context, arg database.InsertAPIKeyParams) (database.APIKey, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.APIKey{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	if arg.LifetimeSeconds == 0 {
		arg.LifetimeSeconds = 86400
	}

	for _, u := range q.users {
		if u.ID == arg.UserID && u.Deleted {
			return database.APIKey{}, xerrors.Errorf("refusing to create APIKey for deleted user")
		}
	}

	//nolint:gosimple
	key := database.APIKey{
		ID:              arg.ID,
		LifetimeSeconds: arg.LifetimeSeconds,
		HashedSecret:    arg.HashedSecret,
		IPAddress:       arg.IPAddress,
		UserID:          arg.UserID,
		ExpiresAt:       arg.ExpiresAt,
		CreatedAt:       arg.CreatedAt,
		UpdatedAt:       arg.UpdatedAt,
		LastUsed:        arg.LastUsed,
		LoginType:       arg.LoginType,
		Scope:           arg.Scope,
		TokenName:       arg.TokenName,
	}
	q.apiKeys = append(q.apiKeys, key)
	return key, nil
}

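// InsertAllUsersGroup creates the per-organization "everyone" group
// (database.EveryoneGroup); its ID deliberately matches the organization ID.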
func (q *FakeQuerier) InsertAllUsersGroup(ctx context.Context, orgID uuid.UUID) (database.Group, error) {
	return q.InsertGroup(ctx, database.InsertGroupParams{
		ID:             orgID,
		Name:           database.EveryoneGroup,
		DisplayName:    "",
		OrganizationID: orgID,
		AvatarURL:      "",
		QuotaAllowance: 0,
	})
}

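// InsertAuditLog appends an audit log entry and keeps q.auditLogs sorted by
// time in ascending order.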
func (q *FakeQuerier) InsertAuditLog(_ context.Context, arg database.InsertAuditLogParams) (database.AuditLog, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.AuditLog{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	alog := database.AuditLog(arg)

	q.auditLogs = append(q.auditLogs, alog)
	slices.SortFunc(q.auditLogs, func(a, b database.AuditLog) int {
		if a.Time.Before(b.Time) {
			return -1
		} else if a.Time.Equal(b.Time) {
			return 0
		}
		return 1
	})

	return alog, nil
}

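// InsertDBCryptKey stores a new database encryption key, rejecting duplicate
// key numbers with errDuplicateKey.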
func (q *FakeQuerier) InsertDBCryptKey(_ context.Context, arg database.InsertDBCryptKeyParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	for _, key := range q.dbcryptKeys {
		if key.Number == arg.Number {
			return errDuplicateKey
		}
	}

	q.dbcryptKeys = append(q.dbcryptKeys, database.DBCryptKey{
		Number:          arg.Number,
		ActiveKeyDigest: sql.NullString{String: arg.ActiveKeyDigest, Valid: true},
		Test:            arg.Test,
	})
	return nil
}

func (q *FakeQuerier) InsertDERPMeshKey(_ context.Context, id string) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	q.derpMeshKey = id
	return nil
}

func (q *FakeQuerier) InsertDeploymentID(_ context.Context, id string) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	q.deploymentID = id
	return nil
}

func (q *FakeQuerier) InsertExternalAuthLink(_ context.Context, arg database.InsertExternalAuthLinkParams) (database.ExternalAuthLink, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.ExternalAuthLink{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()
	// nolint:gosimple
	gitAuthLink := database.ExternalAuthLink{
		ProviderID:             arg.ProviderID,
		UserID:                 arg.UserID,
		CreatedAt:              arg.CreatedAt,
		UpdatedAt:              arg.UpdatedAt,
		OAuthAccessToken:       arg.OAuthAccessToken,
		OAuthAccessTokenKeyID:  arg.OAuthAccessTokenKeyID,
		OAuthRefreshToken:      arg.OAuthRefreshToken,
		OAuthRefreshTokenKeyID: arg.OAuthRefreshTokenKeyID,
		OAuthExpiry:            arg.OAuthExpiry,
		OAuthExtra:             arg.OAuthExtra,
	}
	q.externalAuthLinks = append(q.externalAuthLinks, gitAuthLink)
	return gitAuthLink, nil
}

func (q *FakeQuerier) InsertFile(_ context.Context, arg database.InsertFileParams) (database.File, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.File{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	file := database.File{
		ID:        arg.ID,
		Hash:      arg.Hash,
		CreatedAt: arg.CreatedAt,
		CreatedBy: arg.CreatedBy,
		Mimetype:  arg.Mimetype,
		Data:      arg.Data,
	}
	q.files = append(q.files, file)
	return file, nil
}

func (q *FakeQuerier) InsertGitSSHKey(_ context.Context, arg database.InsertGitSSHKeyParams) (database.GitSSHKey, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.GitSSHKey{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	gitSSHKey := database.GitSSHKey{
		UserID:     arg.UserID,
		CreatedAt:  arg.CreatedAt,
		UpdatedAt:  arg.UpdatedAt,
		PrivateKey: arg.PrivateKey,
		PublicKey:  arg.PublicKey,
	}
	q.gitSSHKey = append(q.gitSSHKey, gitSSHKey)
	return gitSSHKey, nil
}

func (q *FakeQuerier) InsertGroup(_ context.Context, arg database.InsertGroupParams) (database.Group, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Group{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, group := range q.groups {
		if group.OrganizationID == arg.OrganizationID &&
			group.Name == arg.Name {
			return database.Group{}, errDuplicateKey
		}
	}

	//nolint:gosimple
	group := database.Group{
		ID:             arg.ID,
		Name:           arg.Name,
		DisplayName:    arg.DisplayName,
		OrganizationID: arg.OrganizationID,
		AvatarURL:      arg.AvatarURL,
		QuotaAllowance: arg.QuotaAllowance,
		Source:         database.GroupSourceUser,
	}

	q.groups = append(q.groups, group)

	return group, nil
}

func (q *FakeQuerier) InsertGroupMember(_ context.Context, arg database.InsertGroupMemberParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, member := range q.groupMembers {
		if member.GroupID == arg.GroupID &&
			member.UserID == arg.UserID {
			return errDuplicateKey
		}
	}

	//nolint:gosimple
	q.groupMembers = append(q.groupMembers, database.GroupMember{
		GroupID: arg.GroupID,
		UserID:  arg.UserID,
	})

	return nil
}

func (q *FakeQuerier) InsertLicense(
	_ context.Context, arg database.InsertLicenseParams,
) (database.License, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.License{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	l := database.License{
		ID:         q.lastLicenseID + 1,
		UploadedAt: arg.UploadedAt,
		JWT:        arg.JWT,
		Exp:        arg.Exp,
	}
	q.lastLicenseID = l.ID
	q.licenses = append(q.licenses, l)
	return l, nil
}

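// InsertMissingGroups creates a group for every requested name that does not
// already exist in the organization and returns only the newly created
// groups.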
func (q *FakeQuerier) InsertMissingGroups(_ context.Context, arg database.InsertMissingGroupsParams) ([]database.Group, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return nil, err
	}

	groupNameMap := make(map[string]struct{})
	for _, g := range arg.GroupNames {
		groupNameMap[g] = struct{}{}
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, g := range q.groups {
		if g.OrganizationID != arg.OrganizationID {
			continue
		}
		delete(groupNameMap, g.Name)
	}

	newGroups := make([]database.Group, 0, len(groupNameMap))
	for k := range groupNameMap {
		g := database.Group{
			ID:             uuid.New(),
			Name:           k,
			OrganizationID: arg.OrganizationID,
			AvatarURL:      "",
			QuotaAllowance: 0,
			DisplayName:    "",
			Source:         arg.Source,
		}
		q.groups = append(q.groups, g)
		newGroups = append(newGroups, g)
	}

	return newGroups, nil
}

func (q *FakeQuerier) InsertOAuth2ProviderApp(_ context.Context, arg database.InsertOAuth2ProviderAppParams) (database.OAuth2ProviderApp, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.OAuth2ProviderApp{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, app := range q.oauth2ProviderApps {
		if app.Name == arg.Name {
			return database.OAuth2ProviderApp{}, errDuplicateKey
		}
	}

	//nolint:gosimple // Go wants database.OAuth2ProviderApp(arg), but we cannot be sure the structs will remain identical.
	app := database.OAuth2ProviderApp{
		ID:          arg.ID,
		CreatedAt:   arg.CreatedAt,
		UpdatedAt:   arg.UpdatedAt,
		Name:        arg.Name,
		Icon:        arg.Icon,
		CallbackURL: arg.CallbackURL,
	}
	q.oauth2ProviderApps = append(q.oauth2ProviderApps, app)

	return app, nil
}

func (q *FakeQuerier) InsertOAuth2ProviderAppSecret(_ context.Context, arg database.InsertOAuth2ProviderAppSecretParams) (database.OAuth2ProviderAppSecret, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.OAuth2ProviderAppSecret{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, app := range q.oauth2ProviderApps {
		if app.ID == arg.AppID {
			secret := database.OAuth2ProviderAppSecret{
				ID:            arg.ID,
				CreatedAt:     arg.CreatedAt,
				HashedSecret:  arg.HashedSecret,
				DisplaySecret: arg.DisplaySecret,
				AppID:         arg.AppID,
			}
			q.oauth2ProviderAppSecrets = append(q.oauth2ProviderAppSecrets, secret)
			return secret, nil
		}
	}

	return database.OAuth2ProviderAppSecret{}, sql.ErrNoRows
}

func (q *FakeQuerier) InsertOrganization(_ context.Context, arg database.InsertOrganizationParams) (database.Organization, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Organization{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	organization := database.Organization{
		ID:        arg.ID,
		Name:      arg.Name,
		CreatedAt: arg.CreatedAt,
		UpdatedAt: arg.UpdatedAt,
	}
	q.organizations = append(q.organizations, organization)
	return organization, nil
}

func (q *FakeQuerier) InsertOrganizationMember(_ context.Context, arg database.InsertOrganizationMemberParams) (database.OrganizationMember, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.OrganizationMember{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	organizationMember := database.OrganizationMember{
		OrganizationID: arg.OrganizationID,
		UserID:         arg.UserID,
		CreatedAt:      arg.CreatedAt,
		UpdatedAt:      arg.UpdatedAt,
		Roles:          arg.Roles,
	}
	q.organizationMembers = append(q.organizationMembers, organizationMember)
	return organizationMember, nil
}

func (q *FakeQuerier) InsertProvisionerJob(_ context.Context, arg database.InsertProvisionerJobParams) (database.ProvisionerJob, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.ProvisionerJob{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	job := database.ProvisionerJob{
		ID:             arg.ID,
		CreatedAt:      arg.CreatedAt,
		UpdatedAt:      arg.UpdatedAt,
		OrganizationID: arg.OrganizationID,
		InitiatorID:    arg.InitiatorID,
		Provisioner:    arg.Provisioner,
		StorageMethod:  arg.StorageMethod,
		FileID:         arg.FileID,
		Type:           arg.Type,
		Input:          arg.Input,
		Tags:           maps.Clone(arg.Tags),
		TraceMetadata:  arg.TraceMetadata,
	}
	job.JobStatus = provisonerJobStatus(job)
	q.provisionerJobs = append(q.provisionerJobs, job)
	return job, nil
}

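// InsertProvisionerJobLogs appends a batch of log entries for a job,
// assigning IDs that continue from the last stored log so ordering stays
// stable across calls.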
func (q *FakeQuerier) InsertProvisionerJobLogs(_ context.Context, arg database.InsertProvisionerJobLogsParams) ([]database.ProvisionerJobLog, error) {
	if err := validateDatabaseType(arg); err != nil {
		return nil, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	logs := make([]database.ProvisionerJobLog, 0)
	id := int64(1)
	if len(q.provisionerJobLogs) > 0 {
		id = q.provisionerJobLogs[len(q.provisionerJobLogs)-1].ID
	}
	for index, output := range arg.Output {
		id++
		logs = append(logs, database.ProvisionerJobLog{
			ID:        id,
			JobID:     arg.JobID,
			CreatedAt: arg.CreatedAt[index],
			Source:    arg.Source[index],
			Level:     arg.Level[index],
			Stage:     arg.Stage[index],
			Output:    output,
		})
	}
	q.provisionerJobLogs = append(q.provisionerJobLogs, logs...)
	return logs, nil
}

func (q *FakeQuerier) InsertReplica(_ context.Context, arg database.InsertReplicaParams) (database.Replica, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Replica{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	replica := database.Replica{
		ID:              arg.ID,
		CreatedAt:       arg.CreatedAt,
		StartedAt:       arg.StartedAt,
		UpdatedAt:       arg.UpdatedAt,
		Hostname:        arg.Hostname,
		RegionID:        arg.RegionID,
		RelayAddress:    arg.RelayAddress,
		Version:         arg.Version,
		DatabaseLatency: arg.DatabaseLatency,
		Primary:         arg.Primary,
	}
	q.replicas = append(q.replicas, replica)
	return replica, nil
}

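// InsertTemplate stores a new template; user-level autostart and autostop
// are allowed by default on insert.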
func (q *FakeQuerier) InsertTemplate(_ context.Context, arg database.InsertTemplateParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	template := database.TemplateTable{
		ID:                           arg.ID,
		CreatedAt:                    arg.CreatedAt,
		UpdatedAt:                    arg.UpdatedAt,
		OrganizationID:               arg.OrganizationID,
		Name:                         arg.Name,
		Provisioner:                  arg.Provisioner,
		ActiveVersionID:              arg.ActiveVersionID,
		Description:                  arg.Description,
		CreatedBy:                    arg.CreatedBy,
		UserACL:                      arg.UserACL,
		GroupACL:                     arg.GroupACL,
		DisplayName:                  arg.DisplayName,
		Icon:                         arg.Icon,
		AllowUserCancelWorkspaceJobs: arg.AllowUserCancelWorkspaceJobs,
		AllowUserAutostart:           true,
		AllowUserAutostop:            true,
		MaxPortSharingLevel:          arg.MaxPortSharingLevel,
	}
	q.templates = append(q.templates, template)
	return nil
}

func (q *FakeQuerier) InsertTemplateVersion(_ context.Context, arg database.InsertTemplateVersionParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	if len(arg.Message) > 1048576 {
		return xerrors.New("message too long")
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	version := database.TemplateVersionTable{
		ID:             arg.ID,
		TemplateID:     arg.TemplateID,
		OrganizationID: arg.OrganizationID,
		CreatedAt:      arg.CreatedAt,
		UpdatedAt:      arg.UpdatedAt,
		Name:           arg.Name,
		Message:        arg.Message,
		Readme:         arg.Readme,
		JobID:          arg.JobID,
		CreatedBy:      arg.CreatedBy,
	}
	q.templateVersions = append(q.templateVersions, version)
	return nil
}

func (q *FakeQuerier) InsertTemplateVersionParameter(_ context.Context, arg database.InsertTemplateVersionParameterParams) (database.TemplateVersionParameter, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.TemplateVersionParameter{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	param := database.TemplateVersionParameter{
		TemplateVersionID:   arg.TemplateVersionID,
		Name:                arg.Name,
		DisplayName:         arg.DisplayName,
		Description:         arg.Description,
		Type:                arg.Type,
		Mutable:             arg.Mutable,
		DefaultValue:        arg.DefaultValue,
		Icon:                arg.Icon,
		Options:             arg.Options,
		ValidationError:     arg.ValidationError,
		ValidationRegex:     arg.ValidationRegex,
		ValidationMin:       arg.ValidationMin,
		ValidationMax:       arg.ValidationMax,
		ValidationMonotonic: arg.ValidationMonotonic,
		Required:            arg.Required,
		DisplayOrder:        arg.DisplayOrder,
		Ephemeral:           arg.Ephemeral,
	}
	q.templateVersionParameters = append(q.templateVersionParameters, param)
	return param, nil
}

func (q *FakeQuerier) InsertTemplateVersionVariable(_ context.Context, arg database.InsertTemplateVersionVariableParams) (database.TemplateVersionVariable, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.TemplateVersionVariable{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	variable := database.TemplateVersionVariable{
		TemplateVersionID: arg.TemplateVersionID,
		Name:              arg.Name,
		Description:       arg.Description,
		Type:              arg.Type,
		Value:             arg.Value,
		DefaultValue:      arg.DefaultValue,
		Required:          arg.Required,
		Sensitive:         arg.Sensitive,
	}
	q.templateVersionVariables = append(q.templateVersionVariables, variable)
	return variable, nil
}

|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
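// InsertUser rejects duplicate usernames and nudges created_at forward by one
// millisecond when it would collide with the previous user, so that user
// ordering stays deterministic for tests that sort by creation time.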
func (q *FakeQuerier) InsertUser(_ context.Context, arg database.InsertUserParams) (database.User, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.User{}, err
	}

	// There is a common bug when using dbmem where two inserted users end up
	// with the same created_at time. That makes user ordering
	// non-deterministic, which breaks some unit tests.
	// To fix this, we make sure that the created_at time is always greater
	// than the last user's created_at time.
	allUsers, _ := q.GetUsers(context.Background(), database.GetUsersParams{})
	if len(allUsers) > 0 {
		lastUser := allUsers[len(allUsers)-1]
		if arg.CreatedAt.Before(lastUser.CreatedAt) ||
			arg.CreatedAt.Equal(lastUser.CreatedAt) {
			// 1 ms is a good enough buffer.
			arg.CreatedAt = lastUser.CreatedAt.Add(time.Millisecond)
		}
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, user := range q.users {
		if user.Username == arg.Username && !user.Deleted {
			return database.User{}, errDuplicateKey
		}
	}

	user := database.User{
		ID:             arg.ID,
		Email:          arg.Email,
		HashedPassword: arg.HashedPassword,
		CreatedAt:      arg.CreatedAt,
		UpdatedAt:      arg.UpdatedAt,
		Username:       arg.Username,
		Status:         database.UserStatusDormant,
		RBACRoles:      arg.RBACRoles,
		LoginType:      arg.LoginType,
	}
	q.users = append(q.users, user)
	return user, nil
}

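// InsertUserGroupsByName resolves the given group names to group IDs and adds
// the user as a member of each matching group.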
func (q *FakeQuerier) InsertUserGroupsByName(_ context.Context, arg database.InsertUserGroupsByNameParams) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	var groupIDs []uuid.UUID
	for _, group := range q.groups {
		for _, groupName := range arg.GroupNames {
			if group.Name == groupName {
				groupIDs = append(groupIDs, group.ID)
			}
		}
	}

	for _, groupID := range groupIDs {
		q.groupMembers = append(q.groupMembers, database.GroupMember{
			UserID:  arg.UserID,
			GroupID: groupID,
		})
	}

	return nil
}

func (q *FakeQuerier) InsertUserLink(_ context.Context, args database.InsertUserLinkParams) (database.UserLink, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	link := database.UserLink{
		UserID:                 args.UserID,
		LoginType:              args.LoginType,
		LinkedID:               args.LinkedID,
		OAuthAccessToken:       args.OAuthAccessToken,
		OAuthAccessTokenKeyID:  args.OAuthAccessTokenKeyID,
		OAuthRefreshToken:      args.OAuthRefreshToken,
		OAuthRefreshTokenKeyID: args.OAuthRefreshTokenKeyID,
		OAuthExpiry:            args.OAuthExpiry,
		DebugContext:           args.DebugContext,
	}

	q.userLinks = append(q.userLinks, link)

	return link, nil
}

func (q *FakeQuerier) InsertWorkspace(_ context.Context, arg database.InsertWorkspaceParams) (database.Workspace, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Workspace{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	workspace := database.Workspace{
		ID:                arg.ID,
		CreatedAt:         arg.CreatedAt,
		UpdatedAt:         arg.UpdatedAt,
		OwnerID:           arg.OwnerID,
		OrganizationID:    arg.OrganizationID,
		TemplateID:        arg.TemplateID,
		Name:              arg.Name,
		AutostartSchedule: arg.AutostartSchedule,
		Ttl:               arg.Ttl,
		LastUsedAt:        arg.LastUsedAt,
		AutomaticUpdates:  arg.AutomaticUpdates,
	}
	q.workspaces = append(q.workspaces, workspace)
	return workspace, nil
}

func (q *FakeQuerier) InsertWorkspaceAgent(_ context.Context, arg database.InsertWorkspaceAgentParams) (database.WorkspaceAgent, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.WorkspaceAgent{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	agent := database.WorkspaceAgent{
		ID:                       arg.ID,
		CreatedAt:                arg.CreatedAt,
		UpdatedAt:                arg.UpdatedAt,
		ResourceID:               arg.ResourceID,
		AuthToken:                arg.AuthToken,
		AuthInstanceID:           arg.AuthInstanceID,
		EnvironmentVariables:     arg.EnvironmentVariables,
		Name:                     arg.Name,
		Architecture:             arg.Architecture,
		OperatingSystem:          arg.OperatingSystem,
		Directory:                arg.Directory,
		InstanceMetadata:         arg.InstanceMetadata,
		ResourceMetadata:         arg.ResourceMetadata,
		ConnectionTimeoutSeconds: arg.ConnectionTimeoutSeconds,
		TroubleshootingURL:       arg.TroubleshootingURL,
		MOTDFile:                 arg.MOTDFile,
		LifecycleState:           database.WorkspaceAgentLifecycleStateCreated,
		DisplayApps:              arg.DisplayApps,
	}

	q.workspaceAgents = append(q.workspaceAgents, agent)
	return agent, nil
}

func (q *FakeQuerier) InsertWorkspaceAgentLogSources(_ context.Context, arg database.InsertWorkspaceAgentLogSourcesParams) ([]database.WorkspaceAgentLogSource, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return nil, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	logSources := make([]database.WorkspaceAgentLogSource, 0)
	for index, source := range arg.ID {
		logSource := database.WorkspaceAgentLogSource{
			ID:               source,
			WorkspaceAgentID: arg.WorkspaceAgentID,
			CreatedAt:        arg.CreatedAt,
			DisplayName:      arg.DisplayName[index],
			Icon:             arg.Icon[index],
		}
		logSources = append(logSources, logSource)
	}
	q.workspaceAgentLogSources = append(q.workspaceAgentLogSources, logSources...)
	return logSources, nil
}

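// InsertWorkspaceAgentLogs appends the given log lines for an agent and
// enforces the same 1 MiB total log size limit as the PostgreSQL schema by
// returning a pq.Error with the max_logs_length constraint when exceeded.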
func (q *FakeQuerier) InsertWorkspaceAgentLogs(_ context.Context, arg database.InsertWorkspaceAgentLogsParams) ([]database.WorkspaceAgentLog, error) {
	if err := validateDatabaseType(arg); err != nil {
		return nil, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	logs := []database.WorkspaceAgentLog{}
	id := int64(0)
	if len(q.workspaceAgentLogs) > 0 {
		id = q.workspaceAgentLogs[len(q.workspaceAgentLogs)-1].ID
	}
	outputLength := int32(0)
	for index, output := range arg.Output {
		id++
		logs = append(logs, database.WorkspaceAgentLog{
			ID:          id,
			AgentID:     arg.AgentID,
			CreatedAt:   arg.CreatedAt,
			Level:       arg.Level[index],
			LogSourceID: arg.LogSourceID,
			Output:      output,
		})
		outputLength += int32(len(output))
	}
	for index, agent := range q.workspaceAgents {
		if agent.ID != arg.AgentID {
			continue
		}
		// Greater than 1MB, same as the PostgreSQL constraint!
		if agent.LogsLength+outputLength > (1 << 20) {
			return nil, &pq.Error{
				Constraint: "max_logs_length",
				Table:      "workspace_agents",
			}
		}
		agent.LogsLength += outputLength
		q.workspaceAgents[index] = agent
		break
	}
	q.workspaceAgentLogs = append(q.workspaceAgentLogs, logs...)
	return logs, nil
}

func (q *FakeQuerier) InsertWorkspaceAgentMetadata(_ context.Context, arg database.InsertWorkspaceAgentMetadataParams) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	metadatum := database.WorkspaceAgentMetadatum{
		WorkspaceAgentID: arg.WorkspaceAgentID,
		Script:           arg.Script,
		DisplayName:      arg.DisplayName,
		Key:              arg.Key,
		Timeout:          arg.Timeout,
		Interval:         arg.Interval,
		DisplayOrder:     arg.DisplayOrder,
	}

	q.workspaceAgentMetadata = append(q.workspaceAgentMetadata, metadatum)
	return nil
}

func (q *FakeQuerier) InsertWorkspaceAgentScripts(_ context.Context, arg database.InsertWorkspaceAgentScriptsParams) ([]database.WorkspaceAgentScript, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return nil, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	scripts := make([]database.WorkspaceAgentScript, 0)
	for index, source := range arg.LogSourceID {
		script := database.WorkspaceAgentScript{
			LogSourceID:      source,
			WorkspaceAgentID: arg.WorkspaceAgentID,
			LogPath:          arg.LogPath[index],
			Script:           arg.Script[index],
			Cron:             arg.Cron[index],
			StartBlocksLogin: arg.StartBlocksLogin[index],
			RunOnStart:       arg.RunOnStart[index],
			RunOnStop:        arg.RunOnStop[index],
			TimeoutSeconds:   arg.TimeoutSeconds[index],
			CreatedAt:        arg.CreatedAt,
		}
		scripts = append(scripts, script)
	}
	q.workspaceAgentScripts = append(q.workspaceAgentScripts, scripts...)
	return scripts, nil
}

func (q *FakeQuerier) InsertWorkspaceAgentStat(_ context.Context, p database.InsertWorkspaceAgentStatParams) (database.WorkspaceAgentStat, error) {
	if err := validateDatabaseType(p); err != nil {
		return database.WorkspaceAgentStat{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	stat := database.WorkspaceAgentStat{
		ID:                          p.ID,
		CreatedAt:                   p.CreatedAt,
		WorkspaceID:                 p.WorkspaceID,
		AgentID:                     p.AgentID,
		UserID:                      p.UserID,
		ConnectionsByProto:          p.ConnectionsByProto,
		ConnectionCount:             p.ConnectionCount,
		RxPackets:                   p.RxPackets,
		RxBytes:                     p.RxBytes,
		TxPackets:                   p.TxPackets,
		TxBytes:                     p.TxBytes,
		TemplateID:                  p.TemplateID,
		SessionCountVSCode:          p.SessionCountVSCode,
		SessionCountJetBrains:       p.SessionCountJetBrains,
		SessionCountReconnectingPTY: p.SessionCountReconnectingPTY,
		SessionCountSSH:             p.SessionCountSSH,
		ConnectionMedianLatencyMS:   p.ConnectionMedianLatencyMS,
	}
	q.workspaceAgentStats = append(q.workspaceAgentStats, stat)
	return stat, nil
}

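// InsertWorkspaceAgentStats mirrors the batch insert: the parallel parameter
// slices are indexed together, and ConnectionsByProto arrives as a single
// JSON array that is decoded and re-marshaled per row.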
func (q *FakeQuerier) InsertWorkspaceAgentStats(_ context.Context, arg database.InsertWorkspaceAgentStatsParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	var connectionsByProto []map[string]int64
	if err := json.Unmarshal(arg.ConnectionsByProto, &connectionsByProto); err != nil {
		return err
	}
	for i := 0; i < len(arg.ID); i++ {
		cbp, err := json.Marshal(connectionsByProto[i])
		if err != nil {
			return xerrors.Errorf("failed to marshal connections_by_proto: %w", err)
		}
		stat := database.WorkspaceAgentStat{
			ID:                          arg.ID[i],
			CreatedAt:                   arg.CreatedAt[i],
			WorkspaceID:                 arg.WorkspaceID[i],
			AgentID:                     arg.AgentID[i],
			UserID:                      arg.UserID[i],
			ConnectionsByProto:          cbp,
			ConnectionCount:             arg.ConnectionCount[i],
			RxPackets:                   arg.RxPackets[i],
			RxBytes:                     arg.RxBytes[i],
			TxPackets:                   arg.TxPackets[i],
			TxBytes:                     arg.TxBytes[i],
			TemplateID:                  arg.TemplateID[i],
			SessionCountVSCode:          arg.SessionCountVSCode[i],
			SessionCountJetBrains:       arg.SessionCountJetBrains[i],
			SessionCountReconnectingPTY: arg.SessionCountReconnectingPTY[i],
			SessionCountSSH:             arg.SessionCountSSH[i],
			ConnectionMedianLatencyMS:   arg.ConnectionMedianLatencyMS[i],
		}
		q.workspaceAgentStats = append(q.workspaceAgentStats, stat)
	}

	return nil
}

func (q *FakeQuerier) InsertWorkspaceApp(_ context.Context, arg database.InsertWorkspaceAppParams) (database.WorkspaceApp, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.WorkspaceApp{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	if arg.SharingLevel == "" {
		arg.SharingLevel = database.AppSharingLevelOwner
	}

	// nolint:gosimple
	workspaceApp := database.WorkspaceApp{
		ID:                   arg.ID,
		AgentID:              arg.AgentID,
		CreatedAt:            arg.CreatedAt,
		Slug:                 arg.Slug,
		DisplayName:          arg.DisplayName,
		Icon:                 arg.Icon,
		Command:              arg.Command,
		Url:                  arg.Url,
		External:             arg.External,
		Subdomain:            arg.Subdomain,
		SharingLevel:         arg.SharingLevel,
		HealthcheckUrl:       arg.HealthcheckUrl,
		HealthcheckInterval:  arg.HealthcheckInterval,
		HealthcheckThreshold: arg.HealthcheckThreshold,
		Health:               arg.Health,
		DisplayOrder:         arg.DisplayOrder,
	}
	q.workspaceApps = append(q.workspaceApps, workspaceApp)
	return workspaceApp, nil
}

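// InsertWorkspaceAppStats emulates the upsert semantics of the real query: if
// a row with the same user, agent, and session already exists, only its
// SessionEndedAt and Requests fields are updated; otherwise a new row is
// appended with a sequential ID.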
func (q *FakeQuerier) InsertWorkspaceAppStats(_ context.Context, arg database.InsertWorkspaceAppStatsParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

InsertWorkspaceAppStatsLoop:
	for i := 0; i < len(arg.UserID); i++ {
		stat := database.WorkspaceAppStat{
			ID:               q.workspaceAppStatsLastInsertID + 1,
			UserID:           arg.UserID[i],
			WorkspaceID:      arg.WorkspaceID[i],
			AgentID:          arg.AgentID[i],
			AccessMethod:     arg.AccessMethod[i],
			SlugOrPort:       arg.SlugOrPort[i],
			SessionID:        arg.SessionID[i],
			SessionStartedAt: arg.SessionStartedAt[i],
			SessionEndedAt:   arg.SessionEndedAt[i],
			Requests:         arg.Requests[i],
		}
		for j, s := range q.workspaceAppStats {
			// Check unique constraint for upsert.
			if s.UserID == stat.UserID && s.AgentID == stat.AgentID && s.SessionID == stat.SessionID {
				q.workspaceAppStats[j].SessionEndedAt = stat.SessionEndedAt
				q.workspaceAppStats[j].Requests = stat.Requests
				continue InsertWorkspaceAppStatsLoop
			}
		}
		q.workspaceAppStats = append(q.workspaceAppStats, stat)
		q.workspaceAppStatsLastInsertID++
	}

	return nil
}

func (q *FakeQuerier) InsertWorkspaceBuild(_ context.Context, arg database.InsertWorkspaceBuildParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	workspaceBuild := database.WorkspaceBuildTable{
		ID:                arg.ID,
		CreatedAt:         arg.CreatedAt,
		UpdatedAt:         arg.UpdatedAt,
		WorkspaceID:       arg.WorkspaceID,
		TemplateVersionID: arg.TemplateVersionID,
		BuildNumber:       arg.BuildNumber,
		Transition:        arg.Transition,
		InitiatorID:       arg.InitiatorID,
		JobID:             arg.JobID,
		ProvisionerState:  arg.ProvisionerState,
		Deadline:          arg.Deadline,
		MaxDeadline:       arg.MaxDeadline,
		Reason:            arg.Reason,
	}
	q.workspaceBuilds = append(q.workspaceBuilds, workspaceBuild)
	return nil
}

func (q *FakeQuerier) InsertWorkspaceBuildParameters(_ context.Context, arg database.InsertWorkspaceBuildParametersParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, name := range arg.Name {
		q.workspaceBuildParameters = append(q.workspaceBuildParameters, database.WorkspaceBuildParameter{
			WorkspaceBuildID: arg.WorkspaceBuildID,
			Name:             name,
			Value:            arg.Value[index],
		})
	}
	return nil
}

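// InsertWorkspaceProxy rejects a duplicate (non-deleted) proxy name and
// assigns the new proxy a RegionID one greater than the highest region ID
// seen so far.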
func (q *FakeQuerier) InsertWorkspaceProxy(_ context.Context, arg database.InsertWorkspaceProxyParams) (database.WorkspaceProxy, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	lastRegionID := int32(0)
	for _, p := range q.workspaceProxies {
		if !p.Deleted && p.Name == arg.Name {
			return database.WorkspaceProxy{}, errDuplicateKey
		}
		if p.RegionID > lastRegionID {
			lastRegionID = p.RegionID
		}
	}

	p := database.WorkspaceProxy{
		ID:                arg.ID,
		Name:              arg.Name,
		DisplayName:       arg.DisplayName,
		Icon:              arg.Icon,
		DerpEnabled:       arg.DerpEnabled,
		DerpOnly:          arg.DerpOnly,
		TokenHashedSecret: arg.TokenHashedSecret,
		RegionID:          lastRegionID + 1,
		CreatedAt:         arg.CreatedAt,
		UpdatedAt:         arg.UpdatedAt,
		Deleted:           false,
	}
	q.workspaceProxies = append(q.workspaceProxies, p)
	return p, nil
}

func (q *FakeQuerier) InsertWorkspaceResource(_ context.Context, arg database.InsertWorkspaceResourceParams) (database.WorkspaceResource, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.WorkspaceResource{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	//nolint:gosimple
	resource := database.WorkspaceResource{
		ID:         arg.ID,
		CreatedAt:  arg.CreatedAt,
		JobID:      arg.JobID,
		Transition: arg.Transition,
		Type:       arg.Type,
		Name:       arg.Name,
		Hide:       arg.Hide,
		Icon:       arg.Icon,
		DailyCost:  arg.DailyCost,
	}
	q.workspaceResources = append(q.workspaceResources, resource)
	return resource, nil
}

func (q *FakeQuerier) InsertWorkspaceResourceMetadata(_ context.Context, arg database.InsertWorkspaceResourceMetadataParams) ([]database.WorkspaceResourceMetadatum, error) {
	if err := validateDatabaseType(arg); err != nil {
		return nil, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	metadata := make([]database.WorkspaceResourceMetadatum, 0)
	id := int64(1)
	if len(q.workspaceResourceMetadata) > 0 {
		id = q.workspaceResourceMetadata[len(q.workspaceResourceMetadata)-1].ID
	}
	for index, key := range arg.Key {
		id++
		value := arg.Value[index]
		metadata = append(metadata, database.WorkspaceResourceMetadatum{
			ID:                  id,
			WorkspaceResourceID: arg.WorkspaceResourceID,
			Key:                 key,
			Value: sql.NullString{
				String: value,
				Valid:  value != "",
			},
			Sensitive: arg.Sensitive[index],
		})
	}
	q.workspaceResourceMetadata = append(q.workspaceResourceMetadata, metadata...)
	return metadata, nil
}

func (q *FakeQuerier) ListWorkspaceAgentPortShares(_ context.Context, workspaceID uuid.UUID) ([]database.WorkspaceAgentPortShare, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	shares := []database.WorkspaceAgentPortShare{}
	for _, share := range q.workspaceAgentPortShares {
		if share.WorkspaceID == workspaceID {
			shares = append(shares, share)
		}
	}

	return shares, nil
}

func (q *FakeQuerier) RegisterWorkspaceProxy(_ context.Context, arg database.RegisterWorkspaceProxyParams) (database.WorkspaceProxy, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, p := range q.workspaceProxies {
		if p.ID == arg.ID {
			p.Url = arg.Url
			p.WildcardHostname = arg.WildcardHostname
			p.DerpEnabled = arg.DerpEnabled
			p.DerpOnly = arg.DerpOnly
			p.Version = arg.Version
			p.UpdatedAt = dbtime.Now()
			q.workspaceProxies[i] = p
			return p, nil
		}
	}
	return database.WorkspaceProxy{}, sql.ErrNoRows
}

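// RevokeDBCryptKey marks the key with the given digest as revoked, first
// emulating the foreign-key constraints by refusing to revoke a key that is
// still referenced by a user link or external auth link.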
func (q *FakeQuerier) RevokeDBCryptKey(_ context.Context, activeKeyDigest string) error {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i := range q.dbcryptKeys {
		key := q.dbcryptKeys[i]

		// Is the key already revoked?
		if !key.ActiveKeyDigest.Valid {
			continue
		}

		if key.ActiveKeyDigest.String != activeKeyDigest {
			continue
		}

		// Check for foreign key constraints.
		for _, ul := range q.userLinks {
			if (ul.OAuthAccessTokenKeyID.Valid && ul.OAuthAccessTokenKeyID.String == activeKeyDigest) ||
				(ul.OAuthRefreshTokenKeyID.Valid && ul.OAuthRefreshTokenKeyID.String == activeKeyDigest) {
				return errForeignKeyConstraint
			}
		}
		for _, gal := range q.externalAuthLinks {
			if (gal.OAuthAccessTokenKeyID.Valid && gal.OAuthAccessTokenKeyID.String == activeKeyDigest) ||
				(gal.OAuthRefreshTokenKeyID.Valid && gal.OAuthRefreshTokenKeyID.String == activeKeyDigest) {
				return errForeignKeyConstraint
			}
		}

		// Revoke the key.
		q.dbcryptKeys[i].RevokedAt = sql.NullTime{Time: dbtime.Now(), Valid: true}
		q.dbcryptKeys[i].RevokedKeyDigest = sql.NullString{String: key.ActiveKeyDigest.String, Valid: true}
		q.dbcryptKeys[i].ActiveKeyDigest = sql.NullString{}
		return nil
	}

	return sql.ErrNoRows
}

func (*FakeQuerier) TryAcquireLock(_ context.Context, _ int64) (bool, error) {
	return false, xerrors.New("TryAcquireLock must only be called within a transaction")
}

func (q *FakeQuerier) UnarchiveTemplateVersion(_ context.Context, arg database.UnarchiveTemplateVersionParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, v := range q.data.templateVersions {
		if v.ID == arg.TemplateVersionID {
			v.Archived = false
			v.UpdatedAt = arg.UpdatedAt
			q.data.templateVersions[i] = v
			return nil
		}
	}

	return sql.ErrNoRows
}

func (q *FakeQuerier) UnfavoriteWorkspace(_ context.Context, arg uuid.UUID) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i := 0; i < len(q.workspaces); i++ {
		if q.workspaces[i].ID != arg {
			continue
		}
		q.workspaces[i].Favorite = false
		return nil
	}

	return nil
}

func (q *FakeQuerier) UpdateAPIKeyByID(_ context.Context, arg database.UpdateAPIKeyByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, apiKey := range q.apiKeys {
		if apiKey.ID != arg.ID {
			continue
		}
		apiKey.LastUsed = arg.LastUsed
		apiKey.ExpiresAt = arg.ExpiresAt
		apiKey.IPAddress = arg.IPAddress
		q.apiKeys[index] = apiKey
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateExternalAuthLink(_ context.Context, arg database.UpdateExternalAuthLinkParams) (database.ExternalAuthLink, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.ExternalAuthLink{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()
	for index, gitAuthLink := range q.externalAuthLinks {
		if gitAuthLink.ProviderID != arg.ProviderID {
			continue
		}
		if gitAuthLink.UserID != arg.UserID {
			continue
		}
		gitAuthLink.UpdatedAt = arg.UpdatedAt
		gitAuthLink.OAuthAccessToken = arg.OAuthAccessToken
		gitAuthLink.OAuthAccessTokenKeyID = arg.OAuthAccessTokenKeyID
		gitAuthLink.OAuthRefreshToken = arg.OAuthRefreshToken
		gitAuthLink.OAuthRefreshTokenKeyID = arg.OAuthRefreshTokenKeyID
		gitAuthLink.OAuthExpiry = arg.OAuthExpiry
		gitAuthLink.OAuthExtra = arg.OAuthExtra
		q.externalAuthLinks[index] = gitAuthLink

		return gitAuthLink, nil
	}
	return database.ExternalAuthLink{}, sql.ErrNoRows
}

func (q *FakeQuerier) UpdateGitSSHKey(_ context.Context, arg database.UpdateGitSSHKeyParams) (database.GitSSHKey, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.GitSSHKey{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, key := range q.gitSSHKey {
		if key.UserID != arg.UserID {
			continue
		}
		key.UpdatedAt = arg.UpdatedAt
		key.PrivateKey = arg.PrivateKey
		key.PublicKey = arg.PublicKey
		q.gitSSHKey[index] = key
		return key, nil
	}
	return database.GitSSHKey{}, sql.ErrNoRows
}

func (q *FakeQuerier) UpdateGroupByID(_ context.Context, arg database.UpdateGroupByIDParams) (database.Group, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Group{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, group := range q.groups {
		if group.ID == arg.ID {
			group.DisplayName = arg.DisplayName
			group.Name = arg.Name
			group.AvatarURL = arg.AvatarURL
			group.QuotaAllowance = arg.QuotaAllowance
			q.groups[i] = group
			return group, nil
		}
	}
	return database.Group{}, sql.ErrNoRows
}

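// UpdateInactiveUsersToDormant marks every active user whose LastSeenAt is
// before params.LastSeenAfter as dormant and returns the affected rows, or
// sql.ErrNoRows when no user matched.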
func (q *FakeQuerier) UpdateInactiveUsersToDormant(_ context.Context, params database.UpdateInactiveUsersToDormantParams) ([]database.UpdateInactiveUsersToDormantRow, error) {
	q.mutex.Lock()
	defer q.mutex.Unlock()

	var updated []database.UpdateInactiveUsersToDormantRow
	for index, user := range q.users {
		if user.Status == database.UserStatusActive && user.LastSeenAt.Before(params.LastSeenAfter) {
			q.users[index].Status = database.UserStatusDormant
			q.users[index].UpdatedAt = params.UpdatedAt
			updated = append(updated, database.UpdateInactiveUsersToDormantRow{
				ID:         user.ID,
				Email:      user.Email,
				LastSeenAt: user.LastSeenAt,
			})
		}
	}

	if len(updated) == 0 {
		return nil, sql.ErrNoRows
	}
	return updated, nil
}

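// UpdateMemberRoles replaces an organization member's roles with the granted
// set, de-duplicating and sorting the role names before storing them.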
func (q *FakeQuerier) UpdateMemberRoles(_ context.Context, arg database.UpdateMemberRolesParams) (database.OrganizationMember, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.OrganizationMember{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, mem := range q.organizationMembers {
		if mem.UserID == arg.UserID && mem.OrganizationID == arg.OrgID {
			uniqueRoles := make([]string, 0, len(arg.GrantedRoles))
			exist := make(map[string]struct{})
			for _, r := range arg.GrantedRoles {
				if _, ok := exist[r]; ok {
					continue
				}
				exist[r] = struct{}{}
				uniqueRoles = append(uniqueRoles, r)
			}
			sort.Strings(uniqueRoles)

			mem.Roles = uniqueRoles
			q.organizationMembers[i] = mem
			return mem, nil
		}
	}

	return database.OrganizationMember{}, sql.ErrNoRows
}

func (q *FakeQuerier) UpdateOAuth2ProviderAppByID(_ context.Context, arg database.UpdateOAuth2ProviderAppByIDParams) (database.OAuth2ProviderApp, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.OAuth2ProviderApp{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for _, app := range q.oauth2ProviderApps {
		if app.Name == arg.Name && app.ID != arg.ID {
			return database.OAuth2ProviderApp{}, errDuplicateKey
		}
	}

	for index, app := range q.oauth2ProviderApps {
		if app.ID == arg.ID {
			newApp := database.OAuth2ProviderApp{
				ID:          arg.ID,
				CreatedAt:   app.CreatedAt,
				UpdatedAt:   arg.UpdatedAt,
				Name:        arg.Name,
				Icon:        arg.Icon,
				CallbackURL: arg.CallbackURL,
			}
			q.oauth2ProviderApps[index] = newApp
			return newApp, nil
		}
	}
	return database.OAuth2ProviderApp{}, sql.ErrNoRows
}

func (q *FakeQuerier) UpdateOAuth2ProviderAppSecretByID(_ context.Context, arg database.UpdateOAuth2ProviderAppSecretByIDParams) (database.OAuth2ProviderAppSecret, error) {
	err := validateDatabaseType(arg)
	if err != nil {
		return database.OAuth2ProviderAppSecret{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, secret := range q.oauth2ProviderAppSecrets {
		if secret.ID == arg.ID {
			newSecret := database.OAuth2ProviderAppSecret{
				ID:            arg.ID,
				CreatedAt:     secret.CreatedAt,
				HashedSecret:  secret.HashedSecret,
				DisplaySecret: secret.DisplaySecret,
				AppID:         secret.AppID,
				LastUsedAt:    arg.LastUsedAt,
			}
			q.oauth2ProviderAppSecrets[index] = newSecret
			return newSecret, nil
		}
	}
	return database.OAuth2ProviderAppSecret{}, sql.ErrNoRows
}

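// UpdateProvisionerDaemonLastSeenAt updates a daemon's LastSeenAt, but never
// moves it backwards: a daemon whose stored timestamp is already newer than
// the argument is skipped.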
func (q *FakeQuerier) UpdateProvisionerDaemonLastSeenAt(_ context.Context, arg database.UpdateProvisionerDaemonLastSeenAtParams) error {
	err := validateDatabaseType(arg)
	if err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for idx := range q.provisionerDaemons {
		if q.provisionerDaemons[idx].ID != arg.ID {
			continue
		}
		if q.provisionerDaemons[idx].LastSeenAt.Time.After(arg.LastSeenAt.Time) {
			continue
		}
		q.provisionerDaemons[idx].LastSeenAt = arg.LastSeenAt
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateProvisionerJobByID(_ context.Context, arg database.UpdateProvisionerJobByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, job := range q.provisionerJobs {
		if arg.ID != job.ID {
			continue
		}
		job.UpdatedAt = arg.UpdatedAt
		job.JobStatus = provisonerJobStatus(job)
		q.provisionerJobs[index] = job
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateProvisionerJobWithCancelByID(_ context.Context, arg database.UpdateProvisionerJobWithCancelByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, job := range q.provisionerJobs {
		if arg.ID != job.ID {
			continue
		}
		job.CanceledAt = arg.CanceledAt
		job.CompletedAt = arg.CompletedAt
		job.JobStatus = provisonerJobStatus(job)
		q.provisionerJobs[index] = job
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateProvisionerJobWithCompleteByID(_ context.Context, arg database.UpdateProvisionerJobWithCompleteByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, job := range q.provisionerJobs {
		if arg.ID != job.ID {
			continue
		}
		job.UpdatedAt = arg.UpdatedAt
		job.CompletedAt = arg.CompletedAt
		job.Error = arg.Error
		job.ErrorCode = arg.ErrorCode
		job.JobStatus = provisonerJobStatus(job)
		q.provisionerJobs[index] = job
		return nil
	}
	return sql.ErrNoRows
}

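// UpdateReplica overwrites the stored replica matching arg.ID with the new
// connection, region, version, and latency details.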
func (q *FakeQuerier) UpdateReplica(_ context.Context, arg database.UpdateReplicaParams) (database.Replica, error) {
	if err := validateDatabaseType(arg); err != nil {
		return database.Replica{}, err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, replica := range q.replicas {
		if replica.ID != arg.ID {
			continue
		}
		replica.Hostname = arg.Hostname
		replica.StartedAt = arg.StartedAt
		replica.StoppedAt = arg.StoppedAt
		replica.UpdatedAt = arg.UpdatedAt
		replica.RelayAddress = arg.RelayAddress
		replica.RegionID = arg.RegionID
		replica.Version = arg.Version
		replica.Error = arg.Error
		replica.DatabaseLatency = arg.DatabaseLatency
		replica.Primary = arg.Primary
		q.replicas[index] = replica
		return replica, nil
	}
	return database.Replica{}, sql.ErrNoRows
}

func (q *FakeQuerier) UpdateTemplateACLByID(_ context.Context, arg database.UpdateTemplateACLByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for i, template := range q.templates {
		if template.ID == arg.ID {
			template.GroupACL = arg.GroupACL
			template.UserACL = arg.UserACL

			q.templates[i] = template
			return nil
		}
	}

	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateTemplateAccessControlByID(_ context.Context, arg database.UpdateTemplateAccessControlByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for idx, tpl := range q.templates {
		if tpl.ID != arg.ID {
			continue
		}
		q.templates[idx].RequireActiveVersion = arg.RequireActiveVersion
		q.templates[idx].Deprecated = arg.Deprecated
		return nil
	}

	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateTemplateActiveVersionByID(_ context.Context, arg database.UpdateTemplateActiveVersionByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, template := range q.templates {
		if template.ID != arg.ID {
			continue
		}
		template.ActiveVersionID = arg.ActiveVersionID
		template.UpdatedAt = arg.UpdatedAt
		q.templates[index] = template
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateTemplateDeletedByID(_ context.Context, arg database.UpdateTemplateDeletedByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for index, template := range q.templates {
		if template.ID != arg.ID {
			continue
		}
		template.Deleted = arg.Deleted
		template.UpdatedAt = arg.UpdatedAt
		q.templates[index] = template
		return nil
	}
	return sql.ErrNoRows
}

func (q *FakeQuerier) UpdateTemplateMetaByID(_ context.Context, arg database.UpdateTemplateMetaByIDParams) error {
	if err := validateDatabaseType(arg); err != nil {
		return err
	}

	q.mutex.Lock()
	defer q.mutex.Unlock()

	for idx, tpl := range q.templates {
		if tpl.ID != arg.ID {
			continue
		}
		tpl.UpdatedAt = dbtime.Now()
|
tpl.Name = arg.Name
|
|
|
|
tpl.DisplayName = arg.DisplayName
|
|
|
|
tpl.Description = arg.Description
|
|
|
|
tpl.Icon = arg.Icon
|
2024-01-05 21:04:14 +00:00
|
|
|
tpl.GroupACL = arg.GroupACL
|
2024-01-11 22:18:46 +00:00
|
|
|
tpl.AllowUserCancelWorkspaceJobs = arg.AllowUserCancelWorkspaceJobs
|
2024-02-13 14:31:20 +00:00
|
|
|
tpl.MaxPortSharingLevel = arg.MaxPortSharingLevel
|
2023-07-13 17:12:29 +00:00
|
|
|
q.templates[idx] = tpl
|
2023-07-19 20:07:33 +00:00
|
|
|
return nil
|
2022-04-06 00:18:26 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
2023-07-19 20:07:33 +00:00
|
|
|
return sql.ErrNoRows
|
2022-04-06 00:18:26 +00:00
|
|
|
}
|
|
|
|
|
2023-07-19 20:07:33 +00:00
|
|
|
func (q *FakeQuerier) UpdateTemplateScheduleByID(_ context.Context, arg database.UpdateTemplateScheduleByIDParams) error {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-19 20:07:33 +00:00
|
|
|
return err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2022-10-10 20:37:06 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for idx, tpl := range q.templates {
|
|
|
|
if tpl.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
tpl.AllowUserAutostart = arg.AllowUserAutostart
|
|
|
|
tpl.AllowUserAutostop = arg.AllowUserAutostop
|
2023-09-01 16:50:12 +00:00
|
|
|
tpl.UpdatedAt = dbtime.Now()
|
2023-07-13 17:12:29 +00:00
|
|
|
tpl.DefaultTTL = arg.DefaultTTL
|
2024-02-13 07:00:35 +00:00
|
|
|
tpl.ActivityBump = arg.ActivityBump
|
2023-12-15 08:27:56 +00:00
|
|
|
tpl.UseMaxTtl = arg.UseMaxTtl
|
2023-07-13 17:12:29 +00:00
|
|
|
tpl.MaxTTL = arg.MaxTTL
|
2023-08-29 18:35:05 +00:00
|
|
|
tpl.AutostopRequirementDaysOfWeek = arg.AutostopRequirementDaysOfWeek
|
|
|
|
tpl.AutostopRequirementWeeks = arg.AutostopRequirementWeeks
|
2023-10-13 16:57:18 +00:00
|
|
|
tpl.AutostartBlockDaysOfWeek = arg.AutostartBlockDaysOfWeek
|
2023-07-13 17:12:29 +00:00
|
|
|
tpl.FailureTTL = arg.FailureTTL
|
2023-08-24 18:25:54 +00:00
|
|
|
tpl.TimeTilDormant = arg.TimeTilDormant
|
|
|
|
tpl.TimeTilDormantAutoDelete = arg.TimeTilDormantAutoDelete
|
2023-07-13 17:12:29 +00:00
|
|
|
q.templates[idx] = tpl
|
2023-07-19 20:07:33 +00:00
|
|
|
return nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-19 20:07:33 +00:00
|
|
|
return sql.ErrNoRows
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-25 13:14:38 +00:00
|
|
|
func (q *FakeQuerier) UpdateTemplateVersionByID(_ context.Context, arg database.UpdateTemplateVersionByIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-25 13:14:38 +00:00
|
|
|
return err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
|
2022-10-10 20:37:06 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, templateVersion := range q.templateVersions {
|
|
|
|
if templateVersion.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
templateVersion.TemplateID = arg.TemplateID
|
|
|
|
templateVersion.UpdatedAt = arg.UpdatedAt
|
|
|
|
templateVersion.Name = arg.Name
|
|
|
|
templateVersion.Message = arg.Message
|
|
|
|
q.templateVersions[index] = templateVersion
|
2023-07-25 13:14:38 +00:00
|
|
|
return nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-07-25 13:14:38 +00:00
|
|
|
return sql.ErrNoRows
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateTemplateVersionDescriptionByJobID(_ context.Context, arg database.UpdateTemplateVersionDescriptionByJobIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
2023-02-02 19:53:48 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, templateVersion := range q.templateVersions {
|
|
|
|
if templateVersion.JobID != arg.JobID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
templateVersion.Readme = arg.Readme
|
|
|
|
templateVersion.UpdatedAt = arg.UpdatedAt
|
|
|
|
q.templateVersions[index] = templateVersion
|
|
|
|
return nil
|
2023-02-02 19:53:48 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return sql.ErrNoRows
|
2023-02-02 19:53:48 +00:00
|
|
|
}
|
|
|
|
|
2023-09-29 19:13:20 +00:00
|
|
|
func (q *FakeQuerier) UpdateTemplateVersionExternalAuthProvidersByJobID(_ context.Context, arg database.UpdateTemplateVersionExternalAuthProvidersByJobIDParams) error {
|
2023-07-13 17:12:29 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
2023-02-02 19:53:48 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, templateVersion := range q.templateVersions {
|
|
|
|
if templateVersion.JobID != arg.JobID {
|
|
|
|
continue
|
2023-02-02 19:53:48 +00:00
|
|
|
}
|
2023-09-29 19:13:20 +00:00
|
|
|
templateVersion.ExternalAuthProviders = arg.ExternalAuthProviders
|
2023-07-13 17:12:29 +00:00
|
|
|
templateVersion.UpdatedAt = arg.UpdatedAt
|
|
|
|
q.templateVersions[index] = templateVersion
|
|
|
|
return nil
|
2023-02-02 19:53:48 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return sql.ErrNoRows
|
|
|
|
}
|
2023-02-02 19:53:48 +00:00
|
|
|
|
2023-08-22 20:15:13 +00:00
|
|
|
func (q *FakeQuerier) UpdateTemplateWorkspacesLastUsedAt(_ context.Context, arg database.UpdateTemplateWorkspacesLastUsedAtParams) error {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for i, ws := range q.workspaces {
|
|
|
|
if ws.TemplateID != arg.TemplateID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
ws.LastUsedAt = arg.LastUsedAt
|
|
|
|
q.workspaces[i] = ws
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2023-12-14 17:38:44 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserAppearanceSettings(_ context.Context, arg database.UpdateUserAppearanceSettingsParams) (database.User, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return database.User{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for index, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
user.ThemePreference = arg.ThemePreference
|
|
|
|
q.users[index] = user
|
|
|
|
return user, nil
|
|
|
|
}
|
|
|
|
return database.User{}, sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserDeletedByID(_ context.Context, params database.UpdateUserDeletedByIDParams) error {
|
|
|
|
if err := validateDatabaseType(params); err != nil {
|
|
|
|
return err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for i, u := range q.users {
|
|
|
|
if u.ID == params.ID {
|
|
|
|
u.Deleted = params.Deleted
|
|
|
|
q.users[i] = u
|
|
|
|
// NOTE: In the real world, this is done by a trigger.
|
|
|
|
i := 0
|
|
|
|
for {
|
|
|
|
if i >= len(q.apiKeys) {
|
|
|
|
break
|
|
|
|
}
|
|
|
|
k := q.apiKeys[i]
|
|
|
|
if k.UserID == u.ID {
|
|
|
|
q.apiKeys[i] = q.apiKeys[len(q.apiKeys)-1]
|
|
|
|
q.apiKeys = q.apiKeys[:len(q.apiKeys)-1]
|
|
|
|
// We removed an element, so decrement
|
|
|
|
i--
|
|
|
|
}
|
|
|
|
i++
|
|
|
|
}
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return sql.ErrNoRows
|
2023-02-02 19:53:48 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserHashedPassword(_ context.Context, arg database.UpdateUserHashedPasswordParams) error {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return err
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2022-04-06 00:18:26 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
user.HashedPassword = arg.HashedPassword
|
|
|
|
q.users[i] = user
|
|
|
|
return nil
|
2022-04-06 00:18:26 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return sql.ErrNoRows
|
2022-04-06 00:18:26 +00:00
|
|
|
}
|
2022-05-02 19:30:46 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserLastSeenAt(_ context.Context, arg database.UpdateUserLastSeenAtParams) (database.User, error) {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-05-02 19:30:46 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
user.LastSeenAt = arg.LastSeenAt
|
|
|
|
user.UpdatedAt = arg.UpdatedAt
|
|
|
|
q.users[index] = user
|
|
|
|
return user, nil
|
2022-05-02 19:30:46 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
|
|
|
func (q *FakeQuerier) UpdateUserLink(_ context.Context, params database.UpdateUserLinkParams) (database.UserLink, error) {
|
|
|
|
if err := validateDatabaseType(params); err != nil {
|
|
|
|
return database.UserLink{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for i, link := range q.userLinks {
|
|
|
|
if link.UserID == params.UserID && link.LoginType == params.LoginType {
|
|
|
|
link.OAuthAccessToken = params.OAuthAccessToken
|
2023-09-06 11:06:26 +00:00
|
|
|
link.OAuthAccessTokenKeyID = params.OAuthAccessTokenKeyID
|
2023-07-13 17:12:29 +00:00
|
|
|
link.OAuthRefreshToken = params.OAuthRefreshToken
|
2023-09-06 11:06:26 +00:00
|
|
|
link.OAuthRefreshTokenKeyID = params.OAuthRefreshTokenKeyID
|
2023-07-13 17:12:29 +00:00
|
|
|
link.OAuthExpiry = params.OAuthExpiry
|
2023-11-27 16:47:23 +00:00
|
|
|
link.DebugContext = params.DebugContext
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
q.userLinks[i] = link
|
|
|
|
return link, nil
|
|
|
|
}
|
2022-09-19 17:08:25 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
return database.UserLink{}, sql.ErrNoRows
|
2022-09-07 16:38:19 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserLinkedID(_ context.Context, params database.UpdateUserLinkedIDParams) (database.UserLink, error) {
|
|
|
|
if err := validateDatabaseType(params); err != nil {
|
|
|
|
return database.UserLink{}, err
|
|
|
|
}
|
|
|
|
|
2022-05-02 19:30:46 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, link := range q.userLinks {
|
|
|
|
if link.UserID == params.UserID && link.LoginType == params.LoginType {
|
|
|
|
link.LinkedID = params.LinkedID
|
|
|
|
|
|
|
|
q.userLinks[i] = link
|
|
|
|
return link, nil
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
}
|
2022-05-02 19:30:46 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.UserLink{}, sql.ErrNoRows
|
2022-05-02 19:30:46 +00:00
|
|
|
}
|
2022-06-17 05:26:40 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserLoginType(_ context.Context, arg database.UpdateUserLoginTypeParams) (database.User, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
|
2022-06-17 05:26:40 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, u := range q.users {
|
|
|
|
if u.ID == arg.UserID {
|
|
|
|
u.LoginType = arg.NewLoginType
|
|
|
|
if arg.NewLoginType != database.LoginTypePassword {
|
|
|
|
u.HashedPassword = []byte{}
|
|
|
|
}
|
|
|
|
q.users[i] = u
|
|
|
|
return u, nil
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, sql.ErrNoRows
|
2022-06-17 05:26:40 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserProfile(_ context.Context, arg database.UpdateUserProfileParams) (database.User, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2022-08-17 23:00:53 +00:00
|
|
|
|
2022-10-17 13:43:30 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
for index, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
user.Email = arg.Email
|
|
|
|
user.Username = arg.Username
|
|
|
|
user.AvatarURL = arg.AvatarURL
|
2024-01-17 12:20:45 +00:00
|
|
|
user.Name = arg.Name
|
2023-07-13 17:12:29 +00:00
|
|
|
q.users[index] = user
|
|
|
|
return user, nil
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, sql.ErrNoRows
|
2022-10-17 13:43:30 +00:00
|
|
|
}
|
|
|
|
|
2023-07-20 13:35:41 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserQuietHoursSchedule(_ context.Context, arg database.UpdateUserQuietHoursScheduleParams) (database.User, error) {
|
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return database.User{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for index, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
user.QuietHoursSchedule = arg.QuietHoursSchedule
|
|
|
|
q.users[index] = user
|
|
|
|
return user, nil
|
|
|
|
}
|
|
|
|
return database.User{}, sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserRoles(_ context.Context, arg database.UpdateUserRolesParams) (database.User, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2022-10-17 13:43:30 +00:00
|
|
|
|
2023-04-20 11:53:34 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-12-01 17:43:28 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
// Set new roles
|
|
|
|
user.RBACRoles = arg.GrantedRoles
|
|
|
|
// Remove duplicates and sort
|
|
|
|
uniqueRoles := make([]string, 0, len(user.RBACRoles))
|
|
|
|
exist := make(map[string]struct{})
|
|
|
|
for _, r := range user.RBACRoles {
|
|
|
|
if _, ok := exist[r]; ok {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
exist[r] = struct{}{}
|
|
|
|
uniqueRoles = append(uniqueRoles, r)
|
|
|
|
}
|
|
|
|
sort.Strings(uniqueRoles)
|
|
|
|
user.RBACRoles = uniqueRoles
|
|
|
|
|
|
|
|
q.users[index] = user
|
|
|
|
return user, nil
|
2022-12-01 17:43:28 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, sql.ErrNoRows
|
2022-12-01 17:43:28 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateUserStatus(_ context.Context, arg database.UpdateUserStatusParams) (database.User, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2022-12-06 18:38:38 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-12-06 18:38:38 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, user := range q.users {
|
|
|
|
if user.ID != arg.ID {
|
|
|
|
continue
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
user.Status = arg.Status
|
|
|
|
user.UpdatedAt = arg.UpdatedAt
|
|
|
|
q.users[index] = user
|
|
|
|
return user, nil
|
2022-12-06 18:38:38 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.User{}, sql.ErrNoRows
|
2022-12-06 18:38:38 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspace(_ context.Context, arg database.UpdateWorkspaceParams) (database.Workspace, error) {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.Workspace{}, err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2023-01-04 21:31:45 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2023-01-04 21:31:45 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, workspace := range q.workspaces {
|
|
|
|
if workspace.Deleted || workspace.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
for _, other := range q.workspaces {
|
|
|
|
if other.Deleted || other.ID == workspace.ID || workspace.OwnerID != other.OwnerID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if other.Name == arg.Name {
|
|
|
|
return database.Workspace{}, errDuplicateKey
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
workspace.Name = arg.Name
|
|
|
|
q.workspaces[i] = workspace
|
|
|
|
|
|
|
|
return workspace, nil
|
2023-01-04 21:31:45 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.Workspace{}, sql.ErrNoRows
|
2023-01-04 21:31:45 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAgentConnectionByID(_ context.Context, arg database.UpdateWorkspaceAgentConnectionByIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
2023-03-07 19:38:11 +00:00
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, agent := range q.workspaceAgents {
|
|
|
|
if agent.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
agent.FirstConnectedAt = arg.FirstConnectedAt
|
|
|
|
agent.LastConnectedAt = arg.LastConnectedAt
|
|
|
|
agent.DisconnectedAt = arg.DisconnectedAt
|
|
|
|
agent.UpdatedAt = arg.UpdatedAt
|
2024-01-04 19:18:54 +00:00
|
|
|
agent.LastConnectedReplicaID = arg.LastConnectedReplicaID
|
2023-07-13 17:12:29 +00:00
|
|
|
q.workspaceAgents[index] = agent
|
2023-06-12 22:40:58 +00:00
|
|
|
return nil
|
|
|
|
}
|
|
|
|
return sql.ErrNoRows
|
2023-03-07 19:38:11 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAgentLifecycleStateByID(_ context.Context, arg database.UpdateWorkspaceAgentLifecycleStateByIDParams) error {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-06-12 22:40:58 +00:00
|
|
|
return err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2022-08-24 18:44:22 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, agent := range q.workspaceAgents {
|
|
|
|
if agent.ID == arg.ID {
|
|
|
|
agent.LifecycleState = arg.LifecycleState
|
|
|
|
agent.StartedAt = arg.StartedAt
|
|
|
|
agent.ReadyAt = arg.ReadyAt
|
|
|
|
q.workspaceAgents[i] = agent
|
|
|
|
return nil
|
2023-02-14 14:27:06 +00:00
|
|
|
}
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return sql.ErrNoRows
|
2023-02-14 14:27:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-28 15:57:23 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAgentLogOverflowByID(_ context.Context, arg database.UpdateWorkspaceAgentLogOverflowByIDParams) error {
|
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
for i, agent := range q.workspaceAgents {
|
|
|
|
if agent.ID == arg.ID {
|
|
|
|
agent.LogsOverflowed = arg.LogsOverflowed
|
|
|
|
q.workspaceAgents[i] = agent
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAgentMetadata(_ context.Context, arg database.UpdateWorkspaceAgentMetadataParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-08-29 23:45:40 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, m := range q.workspaceAgentMetadata {
|
2023-10-13 13:37:55 +00:00
|
|
|
if m.WorkspaceAgentID != arg.WorkspaceAgentID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
for j := 0; j < len(arg.Key); j++ {
|
|
|
|
if m.Key == arg.Key[j] {
|
|
|
|
q.workspaceAgentMetadata[i].Value = arg.Value[j]
|
|
|
|
q.workspaceAgentMetadata[i].Error = arg.Error[j]
|
|
|
|
q.workspaceAgentMetadata[i].CollectedAt = arg.CollectedAt[j]
|
|
|
|
return nil
|
|
|
|
}
|
2022-08-29 23:45:40 +00:00
|
|
|
}
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
return nil
|
2022-08-29 23:45:40 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAgentStartupByID(_ context.Context, arg database.UpdateWorkspaceAgentStartupByIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
|
2023-08-09 05:10:28 +00:00
|
|
|
if len(arg.Subsystems) > 0 {
|
|
|
|
seen := map[database.WorkspaceAgentSubsystem]struct{}{
|
|
|
|
arg.Subsystems[0]: {},
|
|
|
|
}
|
|
|
|
for i := 1; i < len(arg.Subsystems); i++ {
|
|
|
|
s := arg.Subsystems[i]
|
|
|
|
if _, ok := seen[s]; ok {
|
|
|
|
return xerrors.Errorf("duplicate subsystem %q", s)
|
|
|
|
}
|
|
|
|
seen[s] = struct{}{}
|
|
|
|
|
|
|
|
if arg.Subsystems[i-1] > arg.Subsystems[i] {
|
|
|
|
return xerrors.Errorf("subsystems not sorted: %q > %q", arg.Subsystems[i-1], arg.Subsystems[i])
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-08-25 21:04:31 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, agent := range q.workspaceAgents {
|
|
|
|
if agent.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-08-25 21:04:31 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
agent.Version = arg.Version
|
2023-10-31 06:08:43 +00:00
|
|
|
agent.APIVersion = arg.APIVersion
|
2023-07-13 17:12:29 +00:00
|
|
|
agent.ExpandedDirectory = arg.ExpandedDirectory
|
2023-08-09 05:10:28 +00:00
|
|
|
agent.Subsystems = arg.Subsystems
|
2023-07-13 17:12:29 +00:00
|
|
|
q.workspaceAgents[index] = agent
|
|
|
|
return nil
|
2022-08-25 21:04:31 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return sql.ErrNoRows
|
2022-08-25 21:04:31 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAppHealthByID(_ context.Context, arg database.UpdateWorkspaceAppHealthByIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-08-17 23:00:53 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, app := range q.workspaceApps {
|
|
|
|
if app.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
app.Health = arg.Health
|
|
|
|
q.workspaceApps[index] = app
|
2023-06-12 22:40:58 +00:00
|
|
|
return nil
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
return sql.ErrNoRows
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
|
|
|
|
2023-10-06 09:27:12 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAutomaticUpdates(_ context.Context, arg database.UpdateWorkspaceAutomaticUpdatesParams) error {
|
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for index, workspace := range q.workspaces {
|
|
|
|
if workspace.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
workspace.AutomaticUpdates = arg.AutomaticUpdates
|
|
|
|
q.workspaces[index] = workspace
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
return sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceAutostart(_ context.Context, arg database.UpdateWorkspaceAutostartParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
2023-04-20 11:53:34 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-08-17 23:00:53 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, workspace := range q.workspaces {
|
|
|
|
if workspace.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
workspace.AutostartSchedule = arg.AutostartSchedule
|
|
|
|
q.workspaces[index] = workspace
|
2023-06-12 22:40:58 +00:00
|
|
|
return nil
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
return sql.ErrNoRows
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
|
|
|
|
2023-09-22 15:22:07 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceBuildCostByID(_ context.Context, arg database.UpdateWorkspaceBuildCostByIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-25 13:14:38 +00:00
|
|
|
return err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2023-04-20 11:53:34 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-08-17 23:00:53 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, workspaceBuild := range q.workspaceBuilds {
|
|
|
|
if workspaceBuild.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-09-22 15:22:07 +00:00
|
|
|
workspaceBuild.DailyCost = arg.DailyCost
|
2023-07-13 17:12:29 +00:00
|
|
|
q.workspaceBuilds[index] = workspaceBuild
|
2023-07-25 13:14:38 +00:00
|
|
|
return nil
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-07-25 13:14:38 +00:00
|
|
|
return sql.ErrNoRows
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
|
|
|
|
2023-09-22 15:22:07 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceBuildDeadlineByID(_ context.Context, arg database.UpdateWorkspaceBuildDeadlineByIDParams) error {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
2023-07-25 13:14:38 +00:00
|
|
|
return err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2023-04-20 11:53:34 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-08-17 23:00:53 +00:00
|
|
|
|
2023-09-22 15:22:07 +00:00
|
|
|
for idx, build := range q.workspaceBuilds {
|
|
|
|
if build.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-09-22 15:22:07 +00:00
|
|
|
build.Deadline = arg.Deadline
|
|
|
|
build.MaxDeadline = arg.MaxDeadline
|
|
|
|
build.UpdatedAt = arg.UpdatedAt
|
|
|
|
q.workspaceBuilds[idx] = build
|
2023-07-25 13:14:38 +00:00
|
|
|
return nil
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2023-09-22 15:22:07 +00:00
|
|
|
|
|
|
|
return sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
|
|
|
func (q *FakeQuerier) UpdateWorkspaceBuildProvisionerStateByID(_ context.Context, arg database.UpdateWorkspaceBuildProvisionerStateByIDParams) error {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for idx, build := range q.workspaceBuilds {
|
|
|
|
if build.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
build.ProvisionerState = arg.ProvisionerState
|
|
|
|
build.UpdatedAt = arg.UpdatedAt
|
|
|
|
q.workspaceBuilds[idx] = build
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2023-07-25 13:14:38 +00:00
|
|
|
return sql.ErrNoRows
|
2022-08-17 23:00:53 +00:00
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceDeletedByID(_ context.Context, arg database.UpdateWorkspaceDeletedByIDParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return err
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2023-04-20 11:53:34 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, workspace := range q.workspaces {
|
|
|
|
if workspace.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
workspace.Deleted = arg.Deleted
|
|
|
|
q.workspaces[index] = workspace
|
|
|
|
return nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return sql.ErrNoRows
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-08-24 18:25:54 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceDormantDeletingAt(_ context.Context, arg database.UpdateWorkspaceDormantDeletingAtParams) (database.Workspace, error) {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-08-04 00:46:02 +00:00
|
|
|
return database.Workspace{}, err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
2023-04-20 11:53:34 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, workspace := range q.workspaces {
|
|
|
|
if workspace.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-08-24 18:25:54 +00:00
|
|
|
workspace.DormantAt = arg.DormantAt
|
|
|
|
if workspace.DormantAt.Time.IsZero() {
|
2023-09-01 16:50:12 +00:00
|
|
|
workspace.LastUsedAt = dbtime.Now()
|
2023-07-21 03:01:11 +00:00
|
|
|
workspace.DeletingAt = sql.NullTime{}
|
|
|
|
}
|
2023-08-24 18:25:54 +00:00
|
|
|
if !workspace.DormantAt.Time.IsZero() {
|
2023-07-21 03:01:11 +00:00
|
|
|
var template database.TemplateTable
|
|
|
|
for _, t := range q.templates {
|
|
|
|
if t.ID == workspace.TemplateID {
|
|
|
|
template = t
|
|
|
|
break
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if template.ID == uuid.Nil {
|
2023-08-04 00:46:02 +00:00
|
|
|
return database.Workspace{}, xerrors.Errorf("unable to find workspace template")
|
2023-07-21 03:01:11 +00:00
|
|
|
}
|
2023-08-24 18:25:54 +00:00
|
|
|
if template.TimeTilDormantAutoDelete > 0 {
|
2023-07-21 03:01:11 +00:00
|
|
|
workspace.DeletingAt = sql.NullTime{
|
|
|
|
Valid: true,
|
2023-08-24 18:25:54 +00:00
|
|
|
Time: workspace.DormantAt.Time.Add(time.Duration(template.TimeTilDormantAutoDelete)),
|
2023-07-21 03:01:11 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
q.workspaces[index] = workspace
|
2023-08-04 00:46:02 +00:00
|
|
|
return workspace, nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-08-04 00:46:02 +00:00
|
|
|
return database.Workspace{}, sql.ErrNoRows
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-08-24 18:25:54 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceLastUsedAt(_ context.Context, arg database.UpdateWorkspaceLastUsedAtParams) error {
|
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for index, workspace := range q.workspaces {
|
|
|
|
if workspace.ID != arg.ID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
workspace.LastUsedAt = arg.LastUsedAt
|
|
|
|
q.workspaces[index] = workspace
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
return sql.ErrNoRows
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceProxy(_ context.Context, arg database.UpdateWorkspaceProxyParams) (database.WorkspaceProxy, error) {
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for _, p := range q.workspaceProxies {
|
|
|
|
if p.Name == arg.Name && p.ID != arg.ID {
|
|
|
|
return database.WorkspaceProxy{}, errDuplicateKey
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
for i, p := range q.workspaceProxies {
|
|
|
|
if p.ID == arg.ID {
|
|
|
|
p.Name = arg.Name
|
|
|
|
p.DisplayName = arg.DisplayName
|
|
|
|
p.Icon = arg.Icon
|
|
|
|
if len(arg.TokenHashedSecret) > 0 {
|
|
|
|
p.TokenHashedSecret = arg.TokenHashedSecret
|
|
|
|
}
|
|
|
|
q.workspaceProxies[i] = p
|
|
|
|
return p, nil
|
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
return database.WorkspaceProxy{}, sql.ErrNoRows
|
|
|
|
}
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceProxyDeleted(_ context.Context, arg database.UpdateWorkspaceProxyDeletedParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for i, p := range q.workspaceProxies {
|
|
|
|
if p.ID == arg.ID {
|
|
|
|
p.Deleted = arg.Deleted
|
2023-09-01 16:50:12 +00:00
|
|
|
p.UpdatedAt = dbtime.Now()
|
2023-07-13 17:12:29 +00:00
|
|
|
q.workspaceProxies[i] = p
|
2023-06-12 22:40:58 +00:00
|
|
|
return nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return sql.ErrNoRows
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspaceTTL(_ context.Context, arg database.UpdateWorkspaceTTLParams) error {
|
2023-06-12 22:40:58 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
|
|
|
return err
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2022-10-10 20:37:06 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
for index, workspace := range q.workspaces {
|
|
|
|
if workspace.ID != arg.ID {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
workspace.Ttl = arg.Ttl
|
|
|
|
q.workspaces[index] = workspace
|
2023-06-12 22:40:58 +00:00
|
|
|
return nil
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
2023-06-12 22:40:58 +00:00
|
|
|
return sql.ErrNoRows
|
2022-10-10 20:37:06 +00:00
|
|
|
}
|
|
|
|
|
2023-08-24 18:25:54 +00:00
|
|
|
func (q *FakeQuerier) UpdateWorkspacesDormantDeletingAtByTemplateID(_ context.Context, arg database.UpdateWorkspacesDormantDeletingAtByTemplateIDParams) error {
|
2023-07-21 03:01:11 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
for i, ws := range q.workspaces {
|
2023-08-22 20:15:13 +00:00
|
|
|
if ws.TemplateID != arg.TemplateID {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
2023-08-24 18:25:54 +00:00
|
|
|
if ws.DormantAt.Time.IsZero() {
|
2023-07-21 03:01:11 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-08-22 20:15:13 +00:00
|
|
|
|
2023-08-24 18:25:54 +00:00
|
|
|
if !arg.DormantAt.IsZero() {
|
|
|
|
ws.DormantAt = sql.NullTime{
|
2023-08-22 20:15:13 +00:00
|
|
|
Valid: true,
|
2023-08-24 18:25:54 +00:00
|
|
|
Time: arg.DormantAt,
|
2023-08-22 20:15:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-07-21 03:01:11 +00:00
|
|
|
deletingAt := sql.NullTime{
|
2023-08-24 18:25:54 +00:00
|
|
|
Valid: arg.TimeTilDormantAutodeleteMs > 0,
|
2023-07-21 03:01:11 +00:00
|
|
|
}
|
2023-08-24 18:25:54 +00:00
|
|
|
if arg.TimeTilDormantAutodeleteMs > 0 {
|
|
|
|
deletingAt.Time = ws.DormantAt.Time.Add(time.Duration(arg.TimeTilDormantAutodeleteMs) * time.Millisecond)
|
2023-07-21 03:01:11 +00:00
|
|
|
}
|
|
|
|
ws.DeletingAt = deletingAt
|
|
|
|
q.workspaces[i] = ws
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpsertAppSecurityKey(_ context.Context, data string) error {
|
2022-10-17 13:43:30 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
q.appSecurityKey = data
|
|
|
|
return nil
|
2022-10-17 13:43:30 +00:00
|
|
|
}
|
|
|
|
|
2023-09-27 15:02:18 +00:00
|
|
|
func (q *FakeQuerier) UpsertApplicationName(_ context.Context, data string) error {
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
q.applicationName = data
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpsertDefaultProxy(_ context.Context, arg database.UpsertDefaultProxyParams) error {
|
|
|
|
q.mutex.Lock()
defer q.mutex.Unlock()

q.defaultProxyDisplayName = arg.DisplayName
|
|
|
|
q.defaultProxyIconURL = arg.IconUrl
|
|
|
|
return nil
|
|
|
|
}
|
2023-06-30 12:38:48 +00:00
|
|
|
|
2023-11-23 16:18:12 +00:00
|
|
|
func (q *FakeQuerier) UpsertHealthSettings(_ context.Context, data string) error {
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
q.healthSettings = []byte(data)
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2024-01-30 01:30:02 +00:00
|
|
|
func (q *FakeQuerier) UpsertJFrogXrayScanByWorkspaceAndAgentID(_ context.Context, arg database.UpsertJFrogXrayScanByWorkspaceAndAgentIDParams) error {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for i, scan := range q.jfrogXRayScans {
|
|
|
|
if scan.AgentID == arg.AgentID && scan.WorkspaceID == arg.WorkspaceID {
|
|
|
|
scan.Critical = arg.Critical
|
|
|
|
scan.High = arg.High
|
|
|
|
scan.Medium = arg.Medium
|
|
|
|
scan.ResultsUrl = arg.ResultsUrl
|
|
|
|
q.jfrogXRayScans[i] = scan
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
//nolint:gosimple
|
|
|
|
q.jfrogXRayScans = append(q.jfrogXRayScans, database.JfrogXrayScan{
|
|
|
|
WorkspaceID: arg.WorkspaceID,
|
|
|
|
AgentID: arg.AgentID,
|
|
|
|
Critical: arg.Critical,
|
|
|
|
High: arg.High,
|
|
|
|
Medium: arg.Medium,
|
|
|
|
ResultsUrl: arg.ResultsUrl,
|
|
|
|
})
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpsertLastUpdateCheck(_ context.Context, data string) error {
|
2023-06-30 12:38:48 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
q.lastUpdateCheck = []byte(data)
|
|
|
|
return nil
|
2023-06-30 12:38:48 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpsertLogoURL(_ context.Context, data string) error {
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
q.logoURL = data
|
|
|
|
return nil
|
|
|
|
}
|
2023-01-23 11:14:47 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpsertOAuthSigningKey(_ context.Context, value string) error {
|
2022-10-17 13:43:30 +00:00
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
q.oauthSigningKey = value
|
|
|
|
return nil
|
2022-10-17 13:43:30 +00:00
|
|
|
}
|
|
|
|
|
2023-12-13 12:31:40 +00:00
|
|
|
func (q *FakeQuerier) UpsertProvisionerDaemon(_ context.Context, arg database.UpsertProvisionerDaemonParams) (database.ProvisionerDaemon, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return database.ProvisionerDaemon{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
for _, d := range q.provisionerDaemons {
|
|
|
|
if d.Name == arg.Name {
|
|
|
|
if d.Tags[provisionersdk.TagScope] == provisionersdk.ScopeOrganization && arg.Tags[provisionersdk.TagOwner] != "" {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if d.Tags[provisionersdk.TagScope] == provisionersdk.ScopeUser && arg.Tags[provisionersdk.TagOwner] != d.Tags[provisionersdk.TagOwner] {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
d.Provisioners = arg.Provisioners
|
2023-12-14 18:23:29 +00:00
|
|
|
d.Tags = maps.Clone(arg.Tags)
|
2023-12-13 12:31:40 +00:00
|
|
|
d.Version = arg.Version
|
|
|
|
d.LastSeenAt = arg.LastSeenAt
|
|
|
|
return d, nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
d := database.ProvisionerDaemon{
|
|
|
|
ID: uuid.New(),
|
|
|
|
CreatedAt: arg.CreatedAt,
|
|
|
|
Name: arg.Name,
|
|
|
|
Provisioners: arg.Provisioners,
|
2023-12-14 18:23:29 +00:00
|
|
|
Tags: maps.Clone(arg.Tags),
|
2023-12-13 12:31:40 +00:00
|
|
|
ReplicaID: uuid.NullUUID{},
|
|
|
|
LastSeenAt: arg.LastSeenAt,
|
|
|
|
Version: arg.Version,
|
2024-01-03 09:01:57 +00:00
|
|
|
APIVersion: arg.APIVersion,
|
2023-12-13 12:31:40 +00:00
|
|
|
}
|
|
|
|
q.provisionerDaemons = append(q.provisionerDaemons, d)
|
|
|
|
return d, nil
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) UpsertServiceBanner(_ context.Context, data string) error {
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
2023-01-23 11:14:47 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
q.serviceBanner = []byte(data)
|
|
|
|
return nil
|
|
|
|
}
|
2022-10-17 13:43:30 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (*FakeQuerier) UpsertTailnetAgent(context.Context, database.UpsertTailnetAgentParams) (database.TailnetAgent, error) {
|
|
|
|
return database.TailnetAgent{}, ErrUnimplemented
|
|
|
|
}
|
2022-10-17 13:43:30 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (*FakeQuerier) UpsertTailnetClient(context.Context, database.UpsertTailnetClientParams) (database.TailnetClient, error) {
|
|
|
|
return database.TailnetClient{}, ErrUnimplemented
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-09-21 19:30:48 +00:00
|
|
|
func (*FakeQuerier) UpsertTailnetClientSubscription(context.Context, database.UpsertTailnetClientSubscriptionParams) error {
|
|
|
|
return ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (*FakeQuerier) UpsertTailnetCoordinator(context.Context, uuid.UUID) (database.TailnetCoordinator, error) {
|
|
|
|
return database.TailnetCoordinator{}, ErrUnimplemented
|
2022-10-17 13:43:30 +00:00
|
|
|
}
|
2022-10-25 00:46:24 +00:00
|
|
|
|
2023-11-15 06:13:27 +00:00
|
|
|
func (*FakeQuerier) UpsertTailnetPeer(_ context.Context, arg database.UpsertTailnetPeerParams) (database.TailnetPeer, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return database.TailnetPeer{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
return database.TailnetPeer{}, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
|
|
|
func (*FakeQuerier) UpsertTailnetTunnel(_ context.Context, arg database.UpsertTailnetTunnelParams) (database.TailnetTunnel, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return database.TailnetTunnel{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
return database.TailnetTunnel{}, ErrUnimplemented
|
|
|
|
}
|
|
|
|
|
2024-02-13 14:31:20 +00:00
|
|
|
func (q *FakeQuerier) UpsertWorkspaceAgentPortShare(_ context.Context, arg database.UpsertWorkspaceAgentPortShareParams) (database.WorkspaceAgentPortShare, error) {
|
|
|
|
err := validateDatabaseType(arg)
|
|
|
|
if err != nil {
|
|
|
|
return database.WorkspaceAgentPortShare{}, err
|
|
|
|
}
|
|
|
|
|
|
|
|
q.mutex.Lock()
|
|
|
|
defer q.mutex.Unlock()
|
|
|
|
|
|
|
|
for i, share := range q.workspaceAgentPortShares {
|
|
|
|
if share.WorkspaceID == arg.WorkspaceID && share.Port == arg.Port && share.AgentName == arg.AgentName {
|
|
|
|
share.ShareLevel = arg.ShareLevel
|
|
|
|
q.workspaceAgentPortShares[i] = share
|
|
|
|
return share, nil
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
//nolint:gosimple // casts are not a simplification
|
|
|
|
psl := database.WorkspaceAgentPortShare{
|
|
|
|
WorkspaceID: arg.WorkspaceID,
|
|
|
|
AgentName: arg.AgentName,
|
|
|
|
Port: arg.Port,
|
|
|
|
ShareLevel: arg.ShareLevel,
|
|
|
|
}
|
|
|
|
q.workspaceAgentPortShares = append(q.workspaceAgentPortShares, psl)
|
|
|
|
|
|
|
|
return psl, nil
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) GetAuthorizedTemplates(ctx context.Context, arg database.GetTemplatesWithFilterParams, prepared rbac.PreparedAuthorized) ([]database.Template, error) {
|
2023-01-23 11:14:47 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return nil, err
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
// Call this to match the same function calls as the SQL implementation.
|
|
|
|
if prepared != nil {
|
|
|
|
_, err := prepared.CompileToSQL(ctx, rbac.ConfigWithACL())
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
2022-10-25 00:46:24 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
var templates []database.Template
|
2023-07-19 20:07:33 +00:00
|
|
|
for _, templateTable := range q.templates {
|
|
|
|
template := q.templateWithUserNoLock(templateTable)
|
2023-07-13 17:12:29 +00:00
|
|
|
if prepared != nil && prepared.Authorize(ctx, template.RBACObject()) != nil {
|
|
|
|
continue
|
|
|
|
}
|
2023-01-23 11:14:47 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if template.Deleted != arg.Deleted {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
if arg.OrganizationID != uuid.Nil && template.OrganizationID != arg.OrganizationID {
|
|
|
|
continue
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if arg.ExactName != "" && !strings.EqualFold(template.Name, arg.ExactName) {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-11-20 19:16:18 +00:00
|
|
|
if arg.Deprecated.Valid && arg.Deprecated.Bool != (template.Deprecated != "") {
|
|
|
|
continue
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
if len(arg.IDs) > 0 {
|
|
|
|
match := false
|
|
|
|
for _, id := range arg.IDs {
|
|
|
|
if template.ID == id {
|
|
|
|
match = true
|
|
|
|
break
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
if !match {
|
|
|
|
continue
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
}
|
2023-07-19 20:07:33 +00:00
|
|
|
templates = append(templates, template)
|
2023-07-13 17:12:29 +00:00
|
|
|
}
|
|
|
|
if len(templates) > 0 {
|
2023-08-09 19:50:26 +00:00
|
|
|
slices.SortFunc(templates, func(a, b database.Template) int {
|
|
|
|
if a.Name != b.Name {
|
|
|
|
return slice.Ascending(a.Name, b.Name)
|
2023-07-13 17:12:29 +00:00
|
|
|
}
|
2023-08-09 19:50:26 +00:00
|
|
|
return slice.Ascending(a.ID.String(), b.ID.String())
|
2023-07-13 17:12:29 +00:00
|
|
|
})
|
|
|
|
return templates, nil
|
2022-10-25 00:46:24 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
return nil, sql.ErrNoRows
|
2022-10-25 00:46:24 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateGroupRoles(_ context.Context, id uuid.UUID) ([]database.TemplateGroup, error) {
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
|
|
|
|
2023-07-19 20:07:33 +00:00
|
|
|
var template database.TemplateTable
|
2023-07-13 17:12:29 +00:00
|
|
|
for _, t := range q.templates {
|
|
|
|
if t.ID == id {
|
|
|
|
template = t
|
|
|
|
break
|
|
|
|
}
|
2023-01-23 11:14:47 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if template.ID == uuid.Nil {
|
|
|
|
return nil, sql.ErrNoRows
|
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
groups := make([]database.TemplateGroup, 0, len(template.GroupACL))
|
|
|
|
for k, v := range template.GroupACL {
|
|
|
|
group, err := q.getGroupByIDNoLock(context.Background(), uuid.MustParse(k))
|
|
|
|
if err != nil && !xerrors.Is(err, sql.ErrNoRows) {
|
|
|
|
return nil, xerrors.Errorf("get group by ID: %w", err)
|
|
|
|
}
|
|
|
|
// We don't delete groups from the map if they
|
|
|
|
// get deleted so just skip.
|
|
|
|
if xerrors.Is(err, sql.ErrNoRows) {
|
2022-10-25 00:46:24 +00:00
|
|
|
continue
|
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
groups = append(groups, database.TemplateGroup{
|
|
|
|
Group: group,
|
|
|
|
Actions: v,
|
|
|
|
})
|
2022-10-25 00:46:24 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
return groups, nil
|
2022-10-25 00:46:24 +00:00
|
|
|
}
|
2022-11-14 17:57:33 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) GetTemplateUserRoles(_ context.Context, id uuid.UUID) ([]database.TemplateUser, error) {
|
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2023-04-20 11:53:34 +00:00
|
|
|
|
2023-07-19 20:07:33 +00:00
|
|
|
var template database.TemplateTable
|
2023-07-13 17:12:29 +00:00
|
|
|
for _, t := range q.templates {
|
|
|
|
if t.ID == id {
|
|
|
|
template = t
|
|
|
|
break
|
2022-11-14 17:57:33 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if template.ID == uuid.Nil {
|
|
|
|
return nil, sql.ErrNoRows
|
2023-06-12 22:40:58 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
users := make([]database.TemplateUser, 0, len(template.UserACL))
|
|
|
|
for k, v := range template.UserACL {
|
|
|
|
user, err := q.getUserByIDNoLock(uuid.MustParse(k))
|
|
|
|
if err != nil && !xerrors.Is(err, sql.ErrNoRows) {
|
|
|
|
return nil, xerrors.Errorf("get user by ID: %w", err)
|
|
|
|
}
|
|
|
|
// We don't delete users from the map if they
|
|
|
|
// get deleted so just skip.
|
|
|
|
if xerrors.Is(err, sql.ErrNoRows) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
if user.Deleted || user.Status == database.UserStatusSuspended {
|
|
|
|
continue
|
2022-11-14 17:57:33 +00:00
|
|
|
}
|
2023-07-13 17:12:29 +00:00
|
|
|
|
|
|
|
users = append(users, database.TemplateUser{
|
|
|
|
User: user,
|
|
|
|
Actions: v,
|
|
|
|
})
|
2022-11-14 17:57:33 +00:00
|
|
|
}
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
return users, nil
|
2022-11-14 17:57:33 +00:00
|
|
|
}
|
2023-01-24 12:24:27 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
func (q *FakeQuerier) GetAuthorizedWorkspaces(ctx context.Context, arg database.GetWorkspacesParams, prepared rbac.PreparedAuthorized) ([]database.GetWorkspacesRow, error) {
|
2023-01-24 12:24:27 +00:00
|
|
|
if err := validateDatabaseType(arg); err != nil {
|
2023-07-13 17:12:29 +00:00
|
|
|
return nil, err
|
2023-01-24 12:24:27 +00:00
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
q.mutex.RLock()
|
|
|
|
defer q.mutex.RUnlock()
|
2023-06-12 22:40:58 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if prepared != nil {
|
|
|
|
// Call this to match the same function calls as the SQL implementation.
|
|
|
|
_, err := prepared.CompileToSQL(ctx, rbac.ConfigWithoutACL())
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
2023-01-24 12:24:27 +00:00
|
|
|
}
|
|
|
|
}
|
2023-03-23 19:09:13 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
workspaces := make([]database.Workspace, 0)
|
|
|
|
for _, workspace := range q.workspaces {
|
|
|
|
if arg.OwnerID != uuid.Nil && workspace.OwnerID != arg.OwnerID {
|
|
|
|
continue
|
|
|
|
}
|
2023-03-23 19:09:13 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if arg.OwnerUsername != "" {
|
|
|
|
owner, err := q.getUserByIDNoLock(workspace.OwnerID)
|
|
|
|
if err == nil && !strings.EqualFold(arg.OwnerUsername, owner.Username) {
|
|
|
|
continue
|
|
|
|
}
|
2023-03-23 19:09:13 +00:00
|
|
|
}
|
2023-04-04 20:07:29 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if arg.TemplateName != "" {
|
|
|
|
template, err := q.getTemplateByIDNoLock(ctx, workspace.TemplateID)
|
|
|
|
if err == nil && !strings.EqualFold(arg.TemplateName, template.Name) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
}
|
2023-04-04 20:07:29 +00:00
|
|
|
|
2024-01-23 17:52:06 +00:00
|
|
|
if arg.UsingActive.Valid {
|
|
|
|
build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspace.ID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, xerrors.Errorf("get latest build: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
template, err := q.getTemplateByIDNoLock(ctx, workspace.TemplateID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, xerrors.Errorf("get template: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
updated := build.TemplateVersionID == template.ActiveVersionID
|
|
|
|
if arg.UsingActive.Bool != updated {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if !arg.Deleted && workspace.Deleted {
|
|
|
|
continue
|
|
|
|
}
|
2023-04-04 20:07:29 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if arg.Name != "" && !strings.Contains(strings.ToLower(workspace.Name), strings.ToLower(arg.Name)) {
|
2023-06-12 22:40:58 +00:00
|
|
|
continue
|
2023-04-04 20:07:29 +00:00
|
|
|
}
|
|
|
|
|
2023-08-22 13:41:58 +00:00
|
|
|
if !arg.LastUsedBefore.IsZero() {
|
|
|
|
if workspace.LastUsedAt.After(arg.LastUsedBefore) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if !arg.LastUsedAfter.IsZero() {
|
|
|
|
if workspace.LastUsedAt.Before(arg.LastUsedAfter) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
if arg.Status != "" {
|
|
|
|
build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspace.ID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, xerrors.Errorf("get latest build: %w", err)
|
|
|
|
}
|
2023-04-04 20:07:29 +00:00
|
|
|
|
2023-07-13 17:12:29 +00:00
|
|
|
job, err := q.getProvisionerJobByIDNoLock(ctx, build.JobID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, xerrors.Errorf("get provisioner job: %w", err)
|
|
|
|
}
			// This logic should match the logic in the workspace.sql file.
			var statusMatch bool
			switch database.WorkspaceStatus(arg.Status) {
			case database.WorkspaceStatusStarting:
				statusMatch = job.JobStatus == database.ProvisionerJobStatusRunning &&
					build.Transition == database.WorkspaceTransitionStart
			case database.WorkspaceStatusStopping:
				statusMatch = job.JobStatus == database.ProvisionerJobStatusRunning &&
					build.Transition == database.WorkspaceTransitionStop
			case database.WorkspaceStatusDeleting:
				statusMatch = job.JobStatus == database.ProvisionerJobStatusRunning &&
					build.Transition == database.WorkspaceTransitionDelete

			case "started":
				statusMatch = job.JobStatus == database.ProvisionerJobStatusSucceeded &&
					build.Transition == database.WorkspaceTransitionStart
			case database.WorkspaceStatusDeleted:
				statusMatch = job.JobStatus == database.ProvisionerJobStatusSucceeded &&
					build.Transition == database.WorkspaceTransitionDelete
			case database.WorkspaceStatusStopped:
				statusMatch = job.JobStatus == database.ProvisionerJobStatusSucceeded &&
					build.Transition == database.WorkspaceTransitionStop
			case database.WorkspaceStatusRunning:
				statusMatch = job.JobStatus == database.ProvisionerJobStatusSucceeded &&
					build.Transition == database.WorkspaceTransitionStart
			default:
				statusMatch = job.JobStatus == database.ProvisionerJobStatus(arg.Status)
			}
			if !statusMatch {
				continue
			}
		}
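
		// The has-agent filter walks the latest build's provisioner job to
		// its resources, gathers the agents attached to those resources, and
		// keeps the workspace if any agent's computed status matches the
		// requested value.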
		if arg.HasAgent != "" {
			build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, workspace.ID)
			if err != nil {
				return nil, xerrors.Errorf("get latest build: %w", err)
			}

			job, err := q.getProvisionerJobByIDNoLock(ctx, build.JobID)
			if err != nil {
				return nil, xerrors.Errorf("get provisioner job: %w", err)
			}

			workspaceResources, err := q.getWorkspaceResourcesByJobIDNoLock(ctx, job.ID)
			if err != nil {
				return nil, xerrors.Errorf("get workspace resources: %w", err)
			}

			var workspaceResourceIDs []uuid.UUID
			for _, wr := range workspaceResources {
				workspaceResourceIDs = append(workspaceResourceIDs, wr.ID)
			}

			workspaceAgents, err := q.getWorkspaceAgentsByResourceIDsNoLock(ctx, workspaceResourceIDs)
			if err != nil {
				return nil, xerrors.Errorf("get workspace agents: %w", err)
			}

			var hasAgentMatched bool
			for _, wa := range workspaceAgents {
				if mapAgentStatus(wa, arg.AgentInactiveDisconnectTimeoutSeconds) == arg.HasAgent {
					hasAgentMatched = true
				}
			}

			if !hasAgentMatched {
				continue
			}
		}

		if arg.Dormant && !workspace.DormantAt.Valid {
			continue
		}

		if len(arg.TemplateIDs) > 0 {
			match := false
			for _, id := range arg.TemplateIDs {
				if workspace.TemplateID == id {
					match = true
					break
				}
			}
			if !match {
				continue
			}
		}

		// If the filter exists, ensure the object is authorized.
		if prepared != nil && prepared.Authorize(ctx, workspace.RBACObject()) != nil {
			continue
		}
		workspaces = append(workspaces, workspace)
	}

	// Sort workspaces (ORDER BY)
	isRunning := func(build database.WorkspaceBuild, job database.ProvisionerJob) bool {
		return job.CompletedAt.Valid && !job.CanceledAt.Valid && !job.Error.Valid && build.Transition == database.WorkspaceTransitionStart
	}

	preloadedWorkspaceBuilds := map[uuid.UUID]database.WorkspaceBuild{}
	preloadedProvisionerJobs := map[uuid.UUID]database.ProvisionerJob{}
	preloadedUsers := map[uuid.UUID]database.User{}
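
	// Fetch each workspace's latest build, that build's provisioner job, and
	// the owning user once up front so the sort comparator can reuse them
	// instead of refetching on every comparison; rows missing with
	// sql.ErrNoRows simply leave the map entry unset.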
	for _, w := range workspaces {
		build, err := q.getLatestWorkspaceBuildByWorkspaceIDNoLock(ctx, w.ID)
		if err == nil {
			preloadedWorkspaceBuilds[w.ID] = build
		} else if !errors.Is(err, sql.ErrNoRows) {
			return nil, xerrors.Errorf("get latest build: %w", err)
		}

		job, err := q.getProvisionerJobByIDNoLock(ctx, build.JobID)
		if err == nil {
			preloadedProvisionerJobs[w.ID] = job
		} else if !errors.Is(err, sql.ErrNoRows) {
			return nil, xerrors.Errorf("get provisioner job: %w", err)
		}

		user, err := q.getUserByIDNoLock(w.OwnerID)
		if err == nil {
			preloadedUsers[w.ID] = user
		} else if !errors.Is(err, sql.ErrNoRows) {
			return nil, xerrors.Errorf("get user: %w", err)
		}
	}
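
	// Order by, in priority: the requester's favorited workspaces, then
	// running workspaces, then owner username, then workspace name.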
	sort.Slice(workspaces, func(i, j int) bool {
		w1 := workspaces[i]
		w2 := workspaces[j]

		// Order by: favorite first
		if arg.RequesterID == w1.OwnerID && w1.Favorite {
			return true
		}
		if arg.RequesterID == w2.OwnerID && w2.Favorite {
			return false
		}

		// Order by: running
		w1IsRunning := isRunning(preloadedWorkspaceBuilds[w1.ID], preloadedProvisionerJobs[w1.ID])
		w2IsRunning := isRunning(preloadedWorkspaceBuilds[w2.ID], preloadedProvisionerJobs[w2.ID])

		if w1IsRunning && !w2IsRunning {
			return true
		}

		if !w1IsRunning && w2IsRunning {
			return false
		}

		// Order by: usernames
		if strings.Compare(preloadedUsers[w1.ID].Username, preloadedUsers[w2.ID].Username) < 0 {
			return true
		}

		// Order by: workspace names
		return strings.Compare(w1.Name, w2.Name) < 0
	})
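
	// Apply OFFSET and LIMIT after sorting. beforePageCount captures the
	// total number of matches before pagination so it can be passed to
	// convertToWorkspaceRowsNoLock alongside the returned page of workspaces.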
	beforePageCount := len(workspaces)

	if arg.Offset > 0 {
		if int(arg.Offset) > len(workspaces) {
			return []database.GetWorkspacesRow{}, nil
		}
		workspaces = workspaces[arg.Offset:]
	}
	if arg.Limit > 0 {
		if int(arg.Limit) > len(workspaces) {
			return q.convertToWorkspaceRowsNoLock(ctx, workspaces, int64(beforePageCount)), nil
		}
		workspaces = workspaces[:arg.Limit]
	}

	return q.convertToWorkspaceRowsNoLock(ctx, workspaces, int64(beforePageCount)), nil
}
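
// GetAuthorizedUsers fetches users with GetUsers and then drops any row the
// prepared RBAC filter rejects.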
func (q *FakeQuerier) GetAuthorizedUsers(ctx context.Context, arg database.GetUsersParams, prepared rbac.PreparedAuthorized) ([]database.GetUsersRow, error) {
	if err := validateDatabaseType(arg); err != nil {
		return nil, err
	}

	// Call this to match the same function calls as the SQL implementation.
	if prepared != nil {
		_, err := prepared.CompileToSQL(ctx, regosql.ConvertConfig{
			VariableConverter: regosql.UserConverter(),
		})
		if err != nil {
			return nil, err
		}
	}

	users, err := q.GetUsers(ctx, arg)
	if err != nil {
		return nil, err
	}

	q.mutex.RLock()
	defer q.mutex.RUnlock()

	filteredUsers := make([]database.GetUsersRow, 0, len(users))
	for _, user := range users {
		// If the filter exists, ensure the object is authorized.
		if prepared != nil && prepared.Authorize(ctx, user.RBACObject()) != nil {
			continue
		}

		filteredUsers = append(filteredUsers, user)
	}
	return filteredUsers, nil
}