Craft

Deliveroo Summary

Overview

Deliveroo is a technology company focused on marketing, selling and delivering restaurant meals to homes and offices. Its platform optimizes food ordering and delivery by connecting web and mobile consumers with restaurant tablet-based point-of-sale order-management terminals and a logistics-optimization algorithm, delivered via its driver smartphone app.

Type: Private
Founded: 2012
HQ: London, GB
Website: deliveroo.co.uk
Employee Ratings: 3.7

Locations

Deliveroo is headquartered in
London, United Kingdom


Latest Updates

Company Growth (employees)

Employees (est., Apr 2021): 6,238 (+4%)
Job Openings: 178
Website Visits (Feb 2021): 9 m (-6%)
Revenue (FY 2017): £277.1 m (+116%)
Cybersecurity rating: B

Key People/Management at Deliveroo

Deliveroo Office Locations

Deliveroo has offices in London, Los Angeles, Balaclava, Brussels and 8 other locations

Deliveroo Financials and Metrics

Deliveroo Revenue

Summary Metrics

Founding Date

2012

Deliveroo total Funding

$1.7 b

Deliveroo latest funding size

$180 m

Time since last funding

4 months ago

Deliveroo investors

Index Ventures, Accel Partners, DST Global, T. Rowe Price, General Catalyst, Amazon, Fidelity, 14W, JamJar Investments, Hoxton Ventures, Hummingbird Ventures, Greenoaks Capital, Fidelity Management and Research Company, Felix Capital, Greg Marsh, Bridgepoint, Rancilio Cube, GR Capital, NGP Capital, H14, Entrée Capital, GC Capital, Greyhound Capital, Angel Capital Management, Future Fifty, Greenoaks, Arnaud Bertrand, Durable Capital

Deliveroo's latest funding round in January 2021 was reported to be $180 m. In total, Deliveroo has raised $1.7 b. Deliveroo's latest valuation is reported to be $800 m.

Deliveroo's revenue was reported to be £277.14 m in FY 2017, which is a 115.6% increase on the previous period.

GBP

Revenue (FY 2017): £277.14 m
Revenue growth (FY 2016 → FY 2017): 115.6%
Gross profit (FY 2017): £64.31 m
Gross profit margin (FY 2017): 23.2%
Net income (FY 2017): (£183.53 m)
EBITDA (FY 2017): (£165.06 m)
EBIT (FY 2017): (£177.99 m)
Cash (31-Dec-2018): £184.56 m
EV: $6.86 b

GBP                      FY 2015    FY 2016    FY 2017    FY 2018
Revenue                  18.1 m     128.6 m    277.1 m
Revenue growth, %                   611%       116%
Cost of goods sold       19.5 m     127.5 m    212.8 m    476 m
Gross profit             (1.4 m)    1.1 m      64.3 m

GBP                      Sep 2018
Cost of goods sold       227 m
Operating expense total  106 m
Pre-tax profit           (185 m)

GBP                  FY 2013   FY 2014   FY 2015   FY 2016   FY 2017   FY 2018
Cash                 132.3 k   16.5 m    90.9 m    179.8 m   380 m     184.6 m
Accounts Receivable  72.8 k    1.5 m     10.1 m    19.7 m    35.2 m
Inventories                                                  5.3 m     7.2 m
Current Assets       132.3 k   16.6 m    95.7 m    197 m     409.4 m   243.1 m

GBP                             FY 2014    FY 2015    FY 2016    FY 2017
Net Income                                 (30.1 m)   (129.1 m)  (183.5 m)
Depreciation and Amortization                         705.4 k    5.1 m
Cash From Operating Activities  (967.7 k)  (24 m)     (111.1 m)  (126 m)
Purchases of PP&E                                     (4.2 m)    (14.9 m)

Financial leverage (FY 2018): 1.7 x

Revenue Breakdown

Deliveroo revenue breakdown by business segment: 99.8% from Provision of services and 0.2% from Other

Deliveroo Operating Metrics

Deliveroo's Active Cities was reported to be 500 in May, 2019.

Apr, 2015Aug, 2016FY, 2016May, 2017Aug, 2017Sep, 2017Nov, 2017Feb, 2018May, 2018Jun, 2018May, 2019
Customers74 k
Drivers45020 k30 k35 k35 k60 k
Restaurants16 k25 k20 k25 k80 k
Active Cities84120140200268500

Deliveroo Acquisitions / Subsidiaries

Company Name   Date             Deal Size
Cultivate      July 31, 2019
Maple          May 08, 2017
Deliveroo Australia
Deliveroo Belgium SPRL
Deliveroo DMCC
Deliveroo France SAS
Deliveroo Germany GMBH
Deliveroo Hong Kong Limited
Deliveroo Ireland Limited
Deliveroo Italy SRL

Deliveroo Hiring Categories

Deliveroo Cybersecurity Score

Cybersecurity rating (premium dataset): B (86/100)
Source: SecurityScorecard

Deliveroo Website Traffic

Alexa Website Rank

Total visits per month (SimilarWeb)

Deliveroo Online and Social Media Presence

Twitter followers

66.47 k Twitter followers


Deliveroo has 66.47 k Twitter followers. The number of followers has increased 0.30% month over month and 2.14% quarter over quarter.

Deliveroo's Trends

Search term - Deliveroo

Twitter Engagement Stats for @deliveroo

  • 32.91 k Tweets
  • 136 Following
  • 66.47 k Followers
  • 191 Tweets last 30 days
  • 2.2 Avg. likes per Tweet
  • 51.3% Tweets with engagement

Deliveroo Technology Stack (BuiltWith)

  • ads (19 products used)
    • Advertising.com
    • AppNexus
    • Atlas
    • Bizo
    • DoubleClick.Net
    • Facebook Custom Audiences
    • Google Floodlight Counter
    • Google Floodlight Sales
    • Google Remarketing
    • Index Exchange
    • LinkedIn Ads
    • Nanigans
    • Rubicon Project
    • SkimLinks
    • Snap Pixel
    • Tapad
    • The Trade Desk
    • Twitter Ads
    • Yahoo Small Business
  • analytics (25 products used)
    • Bizo Insights
    • DoubleClick Floodlight
    • Facebook Pixel
    • Facebook Signal
    • Fastly
    • Global Site Tag
    • Google AdWords Conversion
    • Google Analytics
    • Google Analytics Classic
    • Google Analytics Ecommerce
    • Google Conversion Linker
    • Google Conversion Tracking
    • Google Universal Analytics
    • Hubspot
    • LinkedIn Insights
    • Mixpanel
    • New Relic
    • Rapleaf
    • Salesforce
    • Segment
    • Twitter Analytics
    • Twitter Conversion Tracking
    • Twitter Website Universal Tag
    • Yahoo Dot
    • Yahoo Web Analytics
  • cdn (6 products used)
    • AJAX Libraries API
    • CDN JS
    • Cloudflare
    • Content Delivery Network
    • GStatic Google Static Content
    • Yahoo Image CDN
  • cdns (2 products used)
    • Amazon CloudFront
    • Fastly Verified CDN
Learn more on BuiltWith

Deliveroo News and Updates

May 07, 2021
Voi Shakes Up Leadership With Former Johnson Advisor And Deliveroo Director
E-scooter sharing start-up Voi has shaken up its management in the UK and Ireland with a number of new appointments, including a former advisor to Boris Johnson and a commercial director at Deliveroo.
May 04, 2021
Global Online Food Delivery Services Market Report 2021 Featuring Market Leaders - takeaway.com, Doordash, Deliveroo, Uber Eats, Zomato, Swiggy, Domino's Pizza, Grubhub, foodpanda, and Just-Eat
Dublin, May 04, 2021 (GLOBE NEWSWIRE) -- The "Online Food Delivery Services Global Market Report 2021: COVID-19 Growth and Change to 2030" report has been added to ResearchAndMarkets.com's offering.

The report focuses on the online food delivery services market, which is experiencing strong growth, and gives a guide to a market that will be shaping and changing our lives over the next ten years and beyond, including its response to the challenge of the global pandemic. Major players in the market are takeaway.com, Doordash, Deliveroo, Uber Eats, Zomato, Swiggy, Domino's Pizza, Grubhub, foodpanda, and Just Eat.

The global online food delivery services market is expected to grow from $115.07 billion in 2020 to $126.91 billion in 2021, at a compound annual growth rate (CAGR) of 10.3%. The growth is mainly due to companies resuming operations and adapting to the new normal while recovering from the COVID-19 impact, which had earlier led to restrictive containment measures involving social distancing, remote working, and the closure of commercial activities. The market is expected to reach $192.16 billion in 2025, at a CAGR of 11%.

The market covered in this report is segmented by type into platform-to-customer and restaurant-to-customer; by channel into websites and mobile applications; and by payment method into cash on delivery and online payment.

The cost of supply chain and logistics will be the key restraint on the market. This includes the cost of order fulfilment, delivery, adjusting business resources to dynamic market demand, and last-mile connectivity, as well as the costs of cardboard packaging, gas, mileage and hiring drivers. The supply chain and logistics have to be in place to avoid spoilage of products with limited shelf life.
Apr 27, 2021
Waitrose expands Deliveroo partnership to 150 stores, creating 400 jobs
Waitrose has expanded its partnership with Deliveroo as it aims to ramp up its online delivery capabilities. (First published on CityAM.)
Apr 22, 2021
Hedge fund Odey takes short position against Deliveroo after disastrous IPO
Odey Asset Management has taken a short position against Deliveroo after a disastrous start to the delivery firm's London IPO.
Apr 22, 2021
Odey Asset Management Unit Takes Short Position in Deliveroo
Apr 19, 2021
London Needs A Tech IPO Hit To Forget Its Deliveroo Debacle

Deliveroo Blogs

Sep 07, 2020
Increase the Reliability of a Go Codebase with Object Constructors
Coming from heavy production experience with languages such as C# and TypeScript, I must admit that my journey with Go has been a bumpy ride so far, but it's certainly a positive one overall. Go shines in areas such as runtime efficiency, built-in tooling support and a simplicity that lets you get up to speed quickly. However, there are areas where it limits your ability to express and model your software robustly, especially in a codebase a team works on together, such as the lack of sum types and generics (luckily, generics support seems to be on its way). One of the limitations I have come across is the absence of built-in constructor support. I stumbled upon it while learning Go, but I was mostly being open-minded. After seeing a few of the problems the lack of constructors caused, I can see the value of adopting constructors in most Go codebases. In this post, I will share a solution that worked for our team, and the advantages of adopting it. I must give credit to John Arundel; thanks to the discussion we had on Twitter, I am able to express a solution here based on what John first made me aware of.

When I say "constructors" in the title of this post, I must confess it's a bit of an overstatement, because I don't see a way of having pure object constructors like we have in C# or Java without changes to the language itself. However, we can work around the lack of constructors in Go by leveraging two other aspects of the language, package scoping and interfaces, and essentially adopt the factory method pattern. Let's first touch on these two aspects and see how we can use them to make our code more robust and protect against unexpected consumption in the future.

Package Scoping

Go doesn't have access modifiers such as private, internal or public per se.
However, you can control whether a type is internal to a package or exposed, through naming: by "unexporting" or "exporting" it. When a type's name starts with a lowercase letter, it is only available within the package itself. This rule also applies to functions, and to members of types such as fields and methods. For example, the following code sample does not compile.

singers/jazzsinger.go:

```go
package singers

type jazzSinger struct {
}

func (jazzSinger) Sing() string {
	return "Des yeux qui font baisser les miens"
}
```

main.go:

```go
package main

import (
	"fmt"

	"github.com/tugberkugurlu/go-package-scope/singers"
)

func main() {
	s := singers.jazzSinger{}
	fmt.Println(s.Sing())
}
```

If we were to run this code, we would get the following error:

```
➜ go-package-scope go run main.go
# command-line-arguments
./main.go:9:7: cannot refer to unexported name singers.jazzSinger
./main.go:9:7: undefined: singers.jazzSinger
```

This demonstrates how package scoping works in Go. You can learn more about packages in Go from Uday's great article on the topic, but this should be enough for our example.

Interfaces

Let's now look at interfaces in Go, which act very much as you would expect. However, the way you "implement" (in Go, "satisfy") an interface is very different from C#, Java or TypeScript. The main difference is that you don't explicitly declare that a struct implements an interface in Go. The compiler considers a struct to satisfy an interface as long as it provides all of the interface's methods with matching signatures, or in Go terminology, as long as the "method set" of the type satisfies the interface's requirements.
Let's look at the following example:

```go
package main

import (
	"fmt"
)

type Singer interface {
	Sing() string
}

type jazzSinger struct {
}

func (jazzSinger) Sing() string {
	return "Des yeux qui font baisser les miens"
}

func main() {
	s := jazzSinger{}
	singToConsole(s)
}

func singToConsole(singer Singer) {
	fmt.Println(singer.Sing())
}
```

This code happily executes. Notice that the jazzSinger struct doesn't say anything about implementing the Singer interface. This is what's called structural typing, as opposed to nominal typing, which is one of C#'s characteristics (see the difference here). From this we can see that Go has a way to abstract away the implementation, and this fact hugely helps when it comes to working around the lack of constructors.

Bringing All These Together

These two aspects of the language can be combined to hide the implementation behind the contract, exposing only what we need. The challenge is providing a way to construct the implementation. Fortunately, there is a workaround for this in Go: we can define an exported function within the package which has access to the internal implementation but exposes it through the interface, as shown in the example below:

```go
package singers

type Singer interface {
	Sing() string
}

type jazzSinger struct {
}

func (jazzSinger) Sing() string {
	return "Des yeux qui font baisser les miens"
}

func NewJazzSinger() Singer {
	return jazzSinger{}
}
```

The NewJazzSinger function can be accessed by the package consumer, but the jazzSinger struct is still hidden.

```go
package main

import (
	"fmt"

	"github.com/tugberkugurlu/go-package-scope/singers"
)

func main() {
	s := singers.NewJazzSinger()
	singToConsole(s)
}

func singToConsole(singer singers.Singer) {
	fmt.Println(singer.Sing())
}
```

Why is this good, and how does it make our code more reliable? Let's go over the main advantages of this technique.
Changes in the struct's fields make our code fail at compile time, rather than at runtime

Unlike other languages (such as TypeScript), Go has no way to enforce that fields are assigned directly: omitted fields default to their zero value, which may not be what you want. The compiler will not help us here, so we would need to track all updates to a struct's fields manually, which is tedious and error-prone (especially in large codebases). In the best case, the code is well tested and the unit tests break. In the worst case, the code blows up at runtime, requiring a rollback of the release. To make matters worse, your application could be running happily without any crashes while its behaviour is wrong because of how the implementation happens to work. This is the hardest and potentially most harmful kind of bug to catch, as it can have a larger impact on the outcome you wanted to achieve in the first place.

Let's imagine our jazzSinger starts getting lyrics from an external resource.
You would structure this by providing an interface and allowing jazzSinger to call into it, which would look like the following:

```go
package singers

// Lyrics
type LyricsProvider interface {
	GetRandom() string
}

type jazzLyricsProvider struct {
}

func (jazzLyricsProvider) GetRandom() string {
	return "Des yeux qui font baisser les miens"
}

func NewJazzLyricsProvider() LyricsProvider {
	return jazzLyricsProvider{}
}

// Singer
type Singer interface {
	Sing() string
}

type jazzSinger struct {
	lyrics LyricsProvider
}

func (js jazzSinger) Sing() string {
	return js.lyrics.GetRandom()
}

func NewJazzSinger(lyrics LyricsProvider) Singer {
	return jazzSinger{
		lyrics: lyrics,
	}
}
```

If we were to build our application without modifying the main package (the consumer of the singers package), we would see the following error:

```
➜ go-package-scope go build main.go
# command-line-arguments
./main.go:9:28: not enough arguments in call to singers.NewJazzSinger
	have ()
	want (singers.LyricsProvider)
```

We wouldn't get this level of feedback if we initialized the struct directly. What we would get instead is a runtime failure:

```
➜ go-package-scope go run main.go
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1091512]

goroutine 1 [running]:
github.com/tugberkugurlu/go-package-scope/singers.JazzSinger.Sing(0x0, 0x0, 0x1010095, 0xc00000e1e0)
	/Users/tugberkugurlu/go/src/github.com/tugberkugurlu/go-package-scope/singers/jazzsinger.go:31 +0x22
main.singToConsole(0x10d7520, 0xc00000e1e0)
	/Users/tugberkugurlu/go/src/github.com/tugberkugurlu/go-package-scope/main.go:14 +0x35
main.main()
	/Users/tugberkugurlu/go/src/github.com/tugberkugurlu/go-package-scope/main.go:10 +0x57
exit status 2
```

Allows you to provide parameter validation as early as possible

Enforcing parameter validation also allows the consumer to act explicitly on potential errors.
I must be honest here: we mostly need this level of validation because of Go's inability to enforce a nil pointer check before accessing a value, which languages like TypeScript provide. My post on TypeScript demonstrates what I mean by this. However, there are genuinely other cases where a compiler cannot guard your own business logic. In our example above, the code can still compile successfully with the constructor implementation and yet produce a runtime error:

```go
package main

import (
	"fmt"

	"github.com/tugberkugurlu/go-package-scope/singers"
)

func main() {
	s := singers.NewJazzSinger(nil)
	singToConsole(s)
}

func singToConsole(singer singers.Singer) {
	fmt.Println(singer.Sing())
}
```

When we run it, we see the error below, even though the code compiled successfully:

```
➜ go-package-scope go build main.go
➜ go-package-scope ./main
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1091512]

goroutine 1 [running]:
github.com/tugberkugurlu/go-package-scope/singers.jazzSinger.Sing(0x0, 0x0, 0x1010095, 0xc00008e030)
	/Users/tugberkugurlu/go/src/github.com/tugberkugurlu/go-package-scope/singers/jazzsinger.go:31 +0x22
main.singToConsole(0x10d75a0, 0xc00008e030)
	/Users/tugberkugurlu/go/src/github.com/tugberkugurlu/go-package-scope/main.go:14 +0x35
main.main()
	/Users/tugberkugurlu/go/src/github.com/tugberkugurlu/go-package-scope/main.go:10 +0x5c
```

As far as I am aware, there isn't a solution in Go that would let us fail at compile time for these cases.
However, thanks to the dedicated constructor for this object, we can explicitly signal potential construction errors by returning multiple values from the function call:

```go
func NewJazzSinger(lyrics LyricsProvider) (Singer, error) {
	if lyrics == nil {
		return nil, errors.New("lyrics cannot be nil")
	}
	return jazzSinger{
		lyrics: lyrics,
	}, nil
}
```

At the point of consumption, dealing with the returned result becomes very explicit:

```go
s, err := singers.NewJazzSinger(nil)
if err != nil {
	log.Fatal(err)
}
// ...
```

Allows you to control the flow of your implementation

The code below is a simplified version of the intended-use scenario from an interesting bug we had in production a while ago:

```go
package main

import (
	"fmt"
)

type JazzSinger struct {
	count int
}

func (j *JazzSinger) Sing() string {
	j.count++
	return "Des yeux qui font baisser les miens"
}

func (j *JazzSinger) Count() int {
	return j.count
}

func main() {
	s := &JazzSinger{}
	singToConsole(s)
	fmt.Println(s.Count())
	singToConsole(s)
	fmt.Println(s.Count())
}

func singToConsole(singer *JazzSinger) {
	fmt.Println(singer.Sing())
}
```

This code works as expected: the singer sings, and the count is incremented. All great!

```
Des yeux qui font baisser les miens
1
Des yeux qui font baisser les miens
2
```

This works because the methods on JazzSinger use a pointer receiver, which means the count is incremented as expected even as the value is passed around, and that's what happens in the scenario above. However, can we guess what will happen if we change our usage as below?

```go
func main() {
	s := JazzSinger{}
	singToConsole(s)
	fmt.Println(s.Count())
	singToConsole(s)
	fmt.Println(s.Count())
}

func singToConsole(singer JazzSinger) {
	fmt.Println(singer.Sing())
}
```

My first guess was that the compiler would fail here, which is a perfectly reasonable assumption since we are not passing a pointer to the Sing method call. If you made the same assumption as I did, you would be wrong.
This compiles perfectly, but it won't work as expected:

```
Des yeux qui font baisser les miens
0
Des yeux qui font baisser les miens
0
```

The worst part is that this would actually work if we were to get rid of the singToConsole function and embed its implementation:

```go
func main() {
	s := JazzSinger{}
	fmt.Println(s.Sing())
	fmt.Println(s.Count())
	fmt.Println(s.Sing())
	fmt.Println(s.Count())
}
```

```
Des yeux qui font baisser les miens
1
Des yeux qui font baisser les miens
2
```

This is the exact reason why your tests will pass even if they have the wrong usage!

```go
package main

import (
	"testing"

	"github.com/deliveroo/assert-go"
)

func TestJazzSinger(t *testing.T) {
	t.Run("count increments as expected", func(t *testing.T) {
		singer := JazzSinger{}
		singer.Sing()
		singer.Sing()
		assert.Equal(t, singer.Count(), 2)
	})
}
```

```
➜ jazz-singer git:(master) ✗ go test -v
=== RUN   TestJazzSinger
=== RUN   TestJazzSinger/count_increments_as_expected
--- PASS: TestJazzSinger (0.00s)
    --- PASS: TestJazzSinger/count_increments_as_expected (0.00s)
PASS
ok      github.com/tugberkugurlu/algos-go/jazz-singer   0.549s
```

After a bit more digging, it turned out that this is actually the intended behaviour of Go, and it's even documented in its spec: "A method call x.m() is valid if the method set of (the type of) x contains m and the argument list can be assigned to the parameter list of m. If x is addressable and &x's method set contains m, x.m() is shorthand for (&x).m()."

I am still unsure why this could be useful, but it is what it is, and it's easy to make the same mistake, because as the creator of a type that can be constructed freely you cannot ensure how the consumer will flow it. In fact, the decision of how the type should be flowed should belong to the owner of the type (i.e. its package), not the consumer, and I have never found a case where I needed to flow a type both as a pointer and as a value. Languages like C# put the burden of this choice onto the author of the type by forcing them to choose between a class and a struct.
In Go, you can make this safer through the constructor pattern as well, by ensuring that your struct cannot be constructed directly and by controlling how the initialized value should flow:

```go
package singers

type Singer interface {
	Sing() string
	Count() int
}

type jazzSinger struct {
	count int
}

func (j *jazzSinger) Sing() string {
	j.count++
	return "Des yeux qui font baisser les miens"
}

func (j *jazzSinger) Count() int {
	return j.count
}

func NewJazzSinger() Singer {
	return &jazzSinger{}
}
```

The consumer of this type needs to construct it through the NewJazzSinger function, which makes the decision to flow the type as a pointer, because the type needs to be able to mutate its own state as it is used.

```go
package main

import (
	"fmt"

	"github.com/tugberkugurlu/go-package-scope/singers"
)

func main() {
	s := singers.NewJazzSinger()
	singToConsole(s)
	fmt.Println(s.Count())
	singToConsole(s)
	fmt.Println(s.Count())
}

func singToConsole(singer singers.Singer) {
	fmt.Println(singer.Sing())
}
```

Drawbacks of Using Interfaces

Returning an interface from your constructor function lets you fully encapsulate the implementation details and disallow uncontrolled changes to your state. However, this comes with a trade-off: as the consumer of a package or library, you need to dig further to understand the intent, and whether the underlying type is returned as a value or as a pointer. This can matter for some use cases where, for example, you want to reduce pressure on the garbage collector by reducing pointer usage. It's actually possible to eliminate the interface here and return the raw struct instead. However, that also comes with its own drawbacks. If you return the exported struct, the consumer of your package can initialize a new struct without using the constructor, e.g. JazzSinger{}.
Allowing consumers to bypass the constructor brings the problems we have seen in this post. If you return an unexported struct, you make it hard for consumers of your package to accumulate the results from the constructor. This Go Playground example shows where this might be critical. It can be worked around by owning, at the consumption level, an interface that matches at least a partial signature of the unexported struct; this Go Playground example shows how to achieve that. In any case, it's best to be informed about these drawbacks and go with the option that fits your use case.

Conclusion

Modelling your domain is hard, and it's even harder if you have rich models which hold mutable state along with explicit behaviours. Go may not give you all the tools to model your domain as richly as some other languages do. However, it's still possible to make it work in many cases by adopting some usage principles. The constructor pattern is one of them, and it has been one of the most useful for me, since I can confidently encapsulate the initialization logic of my model by enforcing state validity within a package scope.
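Pulling the pieces of the post together, the whole pattern condenses into one runnable sketch. The two packages are collapsed into a single file here so it runs standalone (which means the unexported struct is not actually hidden in this form), and staticLyrics is an illustrative stand-in for a real provider:

```go
package main

import (
	"errors"
	"fmt"
)

// LyricsProvider abstracts where lyrics come from.
type LyricsProvider interface{ GetRandom() string }

// staticLyrics is a stand-in provider for this sketch.
type staticLyrics struct{ line string }

func (s staticLyrics) GetRandom() string { return s.line }

// Singer is the only contract consumers see; jazzSinger would be
// unexported in its own package.
type Singer interface{ Sing() string }

type jazzSinger struct{ lyrics LyricsProvider }

func (j jazzSinger) Sing() string { return j.lyrics.GetRandom() }

// NewJazzSinger is the constructor: it validates its inputs up front
// and decides how the value flows to consumers.
func NewJazzSinger(lyrics LyricsProvider) (Singer, error) {
	if lyrics == nil {
		return nil, errors.New("lyrics cannot be nil")
	}
	return jazzSinger{lyrics: lyrics}, nil
}

func main() {
	s, err := NewJazzSinger(staticLyrics{line: "Des yeux qui font baisser les miens"})
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Sing())
}
```

In a real codebase, Singer, jazzSinger and NewJazzSinger would live in a separate package (like the singers package in the post), so jazzSinger stays invisible to consumers and the constructor is the only entry point.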
Jun 16, 2020
Using AWS EC2 and ECS to host hundreds of services
One of my goals in moving internally to the Production Engineering team was to help demystify the concepts that are commonplace within our Platform teams. My first internal blog post doing this shared how we use EC2 (AWS Elastic Compute Cloud) and ECS (AWS Elastic Container Service) to host the hundreds of services that our Software Engineers build and improve every day.

What is an EC2 host?

"Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment." (Amazon's description of EC2)

I would say this: an EC2 host is a server. More simply, it is a computer. It is (in most cases) not an actual physical server in a rack, and Amazon abstracts that detail away from us, but I find I get my head around the concept more easily by thinking of them as physical machines anyway. The machines we generally use have 16 vCPUs and 64 GiB of memory (RAM). Each comes preinstalled with the software required to make it a usable computer, like an operating system (you can just assume Linux for now; others are available), so it can be booted up and can run processes. More on that later.

What do we use EC2 hosts for?

A few different things, but the most common use is in an ECS cluster, a grouping of EC2 machines used as a home for ECS tasks. These are the Dockerized containers of our applications, running with a command specified by the engineer in a config file.

ECS? What's that, and how is it related to EC2?

ECS is AWS's Elastic Container Service. It is an orchestration service that makes sure the correct number of each service is running in the environment.
What it actually runs are the Docker containers that our continuous deployment provider built when a PR was last merged to master. When an engineer tells Hopper, our application release manager, to first scale one of their app's services up from 0 to 1 task, Hopper calls ECS to ask it to ensure that at all times there is one healthy instance of their Docker container running on any one of the EC2 hosts. This is the desired number of tasks: if the number of running tasks is less than this, ECS starts more containers to reach the desired number; if there are more than desired, ECS safely terminates running containers to reach it.

Where does ECS start running this one container I've asked for?

This takes us back to our cluster of EC2 machines. ECS finds an EC2 machine in the cluster that has enough spare capacity to hold and run your task (i.e. enough spare reserved CPU and memory, which are specified in the config file). There are some other rules in place regarding which Availability Zone your task runs in (we don't want all your eggs in one basket), but for the most part we leave the decision to ECS.

What happens if the cluster is full?

We constantly monitor the ECS cluster and autoscale EC2 instances based on how much spare capacity there is. If there isn't enough spare capacity to immediately run another 40 large Docker containers, we bump up the desired count of EC2 instances in the cluster, and EC2 spins up new machines (the number we start depends on how far below approximately 40 large containers' worth of capacity we are). New EC2 instances can take a few minutes before they're ready to be used by ECS, so we need a buffer to deal with unexpected spikes in demand.

How do we change or upgrade the machine image used?

Circling back to the software preinstalled on these EC2 servers.
When an instance boots, an Amazon Machine Image (AMI) is used, which has some basic tools installed to make the machine usable. Amazon provides a base AMI which we have built upon, using Packer and Ansible, to create our own Amazon Linux-derived machine image. This, plus some initialization scripts, gives all our running ECS tasks access to the things every Deliveroo service needs: software (like the Datadog agent, which sends metrics to Datadog, and AWS Inspector, AWS's automated security assessment service), roles, security policies, and environment variables that we need to apply to the containers.

The process of rolling out a new machine image when an update is available, or when we make changes to our custom image, is not as straightforward as I'm used to as an application developer (I have a new-found appreciation for release management software). Only newly created EC2 machines are built from the new image, so the rollout consists of the following steps in each of our AWS environments (sandbox, staging, production):

  • Disabling some of the cluster autoscaling rules, as we only want EC2 instances using the old image to be terminated when the cluster gets too big.
  • Slowly scaling up the number of desired EC2 instances using the new AMI, and observing whether the change looks to be applied correctly, or whether issues occur or alerts trigger.
  • Slowly reducing the desired number of old EC2 instances. Terminated instances send a message to ECS to safely end all the tasks running on them; without this, very few new services would actually be placed on the new EC2 instances to test the changes incrementally.
  • Once the cluster is fully on the new EC2 instances, adjusting and re-enabling the autoscaling rules so that the old AMI is no longer used, and we continue to autoscale using only the new AMI.
  • Repeating until fully rolled out on all environments.
We use an A/B system to deploy - the old AMI and configurations remained as the B option, while any changes are only applied to the A track. On the first attempt we noticed some issues with the new machine image after starting a relatively small number of EC2 machines; it was as simple as scaling B back up to an appropriate level, and A down to 0. As disappointing as it was to fail the first time, I learnt so much more about the process by having to undo it halfway through than I would have done if it had gone perfectly.
Jun 16, 2020
Using AWS EC2 and ECS to host hundreds of services
One of my goals in moving internally to the Production Engineering team was to help demystify the concepts that are commonplace within our Platform teams. My first internal blog post to do this was to share how we use EC2 (AWS Elastic Compute Cloud) and ECS (AWS Elastic Container Service) to host the hundreds of services that our Software Engineers build and improve every day.

What is an EC2 host?

“Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.” (Amazon’s description of EC2)

I would put it this way: an EC2 host is a server. More simply, it is a computer. It is (in most cases) not an actual physical server in a rack, and Amazon abstracts that detail away from us, but I find it easier to get my head around the concept by thinking of them as physical machines anyway. The machines we generally use have 16 vCPUs and 64 GiB of memory (RAM). Each comes preinstalled with the software required to make it a usable computer, like an operating system (you can just assume Linux for now, though others are available), so it can be booted up and can run processes - more on that later…

What do we use EC2 hosts for?

A few different uses, but the most common is in an ECS Cluster: a grouping of EC2 machines used as a home for ECS Tasks. These are the Dockerized containers of our applications, running with a command specified by the engineer in a config file.

ECS? What’s that, and how is it related to EC2?

ECS is AWS’s Elastic Container Service. It is an orchestration service that makes sure the correct number of instances of each service is running in the environment.
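The config file mentioned above is, in ECS terms, a task definition. A minimal sketch might look like the following; the family, image, command, and resource sizes here are hypothetical placeholders, not Deliveroo’s actual configuration:

```json
{
  "family": "example-service",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example.registry/web:latest",
      "command": ["bundle", "exec", "puma"],
      "cpu": 1024,
      "memory": 4096,
      "essential": true
    }
  ]
}
```

The cpu and memory reservations are what ECS uses to decide whether a task fits on a given host.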
What it is actually running are the Docker containers that our continuous deployment provider built when a PR was last merged to master. When an engineer tells Hopper, our application release manager, to scale one of their app’s services up from 0 to 1 task, Hopper makes a call to ECS asking it to ensure that at all times there is one healthy instance of their Docker container running on any one of the EC2 hosts. This is the desired number of tasks: if the number of running tasks is less than this, ECS will start more containers to reach the desired number; if there are more than desired, ECS will safely terminate running containers to reach it.

Where does ECS start running this one container I’ve asked for?

This takes us back to our cluster of EC2 machines. ECS will find an EC2 machine in the cluster that has enough spare capacity to hold and run your task (i.e. enough spare reserved CPU and memory, as specified in the config file). There are some other rules in place regarding which Availability Zone your task runs in (we don’t want all your eggs in one basket), but for the most part we leave it to ECS to decide.

What happens if the cluster is full?

We are constantly monitoring the ECS cluster, and we autoscale EC2 instances based on how much spare capacity there is. If there’s not enough spare capacity to immediately run another 40 large Docker containers, we bump up the desired count of EC2 instances in the cluster, and EC2 spins up new machines (the number of machines we start depends on how far below approximately 40 large containers of capacity we are). New EC2 instances can take a few minutes before they’re ready to be used by ECS, so we need a buffer to deal with unexpected spikes in demand.

How do we change or upgrade the machine image used?

Circling back to the software that is preinstalled on these EC2 servers.
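The headroom rule described above can be sketched in a few lines. This is an illustrative model rather than Hopper’s or Deliveroo’s actual autoscaler, and the per-task reservations are assumptions; only the host size (16 vCPUs, 64 GiB) and the ~40-task buffer come from the post:

```ruby
# Illustrative model of the cluster headroom rule: keep room for roughly 40
# "large" containers, and work out how many EC2 instances to add when short.
LARGE_TASK_CPU = 1_024       # CPU units one large task reserves (assumption)
LARGE_TASK_MEM = 4_096       # MiB one large task reserves (assumption)
INSTANCE_CPU   = 16 * 1_024  # a 16 vCPU host, as described above
INSTANCE_MEM   = 64 * 1_024  # 64 GiB of memory per host
BUFFER_TASKS   = 40          # desired spare capacity, in large tasks

def instances_to_add(spare_cpu, spare_mem)
  # How many large tasks fit in the cluster's current spare capacity?
  fits = [spare_cpu / LARGE_TASK_CPU, spare_mem / LARGE_TASK_MEM].min
  shortfall = BUFFER_TASKS - fits
  return 0 if shortfall <= 0

  # Each new instance holds whichever of CPU or memory runs out first.
  per_instance = [INSTANCE_CPU / LARGE_TASK_CPU, INSTANCE_MEM / LARGE_TASK_MEM].min
  (shortfall.to_f / per_instance).ceil
end
```

With an empty cluster this asks for 3 extra hosts (40 tasks at 16 per host, rounded up); with plenty of headroom it asks for none.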
When an EC2 machine is booted, an Amazon Machine Image (AMI) is used, which has some basic tools installed on it to make the machine usable. Amazon provides a base AMI, which we have built upon using Packer and Ansible to create our own Amazon Linux-derived machine image. This, and some initialization scripts, give all our running ECS tasks access to things that all Deliveroo’s services will need: software (like the Datadog agent, which sends metrics to Datadog, and AWS Inspector, AWS’s automated security assessment service), roles, security policies, and environment variables that we need to apply to the containers.

The process of rolling out a new machine image, when an update is available or when we make changes to our custom machine image, is not as straightforward as I’m used to as an application developer (I have a new-found appreciation for release management software). Only newly created EC2 machines will be built using the new image, so the rollout on each of our AWS environments (sandbox, staging, production) consists of the following steps:

1. Disable some of the cluster autoscaling rules, as we only want EC2 instances using the old image to be terminated when the cluster gets too big.
2. Slowly scale up the number of desired EC2 instances using the new AMI, and observe whether the change looks to be applied correctly, or whether issues occur or alerts trigger.
3. Slowly reduce the desired number of old EC2 instances. Terminated instances send a message to ECS to safely end all the tasks running on them; without this step, very few services would actually be placed on the new EC2 instances to test the changes incrementally.
4. Once the cluster is fully on the new EC2 instances, adjust and re-enable the autoscaling rules so that the old AMI is no longer used, and we continue to autoscale instances using only the new AMI.
5. Repeat until fully rolled out, on all environments.
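A Packer build of the kind described, layering an Ansible playbook on top of a base AMI, might be sketched roughly like this. The region, instance type, source AMI, and playbook path are placeholders, not our actual build:

```hcl
# Hypothetical Packer (HCL2) sketch: bake a custom image from a base AMI,
# then run an Ansible playbook to install the shared tooling.
source "amazon-ebs" "ecs_host" {
  region        = "eu-west-1"
  instance_type = "t3.large"
  source_ami    = "ami-00000000000000000" # placeholder base AMI
  ssh_username  = "ec2-user"
  ami_name      = "custom-ecs-host-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.ecs_host"]

  provisioner "ansible" {
    playbook_file = "./playbooks/ecs-host.yml" # placeholder playbook
  }
}
```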
We use an A/B system to deploy: the old AMI and configuration remain as the B option, while any changes are applied only to the A track. On our first attempt we noticed some issues with the new machine image after starting a relatively small number of EC2 machines; recovering was as simple as scaling B back up to an appropriate level and A down to 0. As disappointing as it was to fail the first time, I learnt much more about the process by having to undo it halfway through than I would have done if it had gone perfectly.
Jan 02, 2020
CloudFormation To Terraform
For those starting with either Terraform or CloudFormation, this guide is a good way to understand the differences between the two. I found myself a little stuck because I needed to find or create code that would help me benchmark our compliance status in AWS. I found a solution in CloudFormation, so I wondered whether there was some sort of translator tool (there wasn’t), and if not, how and where would I start translating this code? Would it be worth building it from scratch in Terraform?

How to convert CloudFormation (CF) to Terraform (TF): CIS Foundations Quickstart

First, let’s state the differences and how each syntax is built.

Terraform

The Terraform language uses HCL (HashiCorp Configuration Language). Terraform code is built around two key syntax constructs:

Arguments: an argument assigns a value to a particular name:

    image_id = "blabla"

Blocks: a block is a container for other content:

    resource "aws_instance" "example" {
      ami = "abc123"
      network_interface {
        # ...
      }
    }

Terraform consists of modules, and what a module does is really up to the builder. Each module has blocks which, along with the configuration, tell Terraform how and when to use or build that module. Configuration files in Terraform are written in HCL (a JSON variant is also supported).

CloudFormation

CloudFormation is all about templates. If you want to build a configuration for an application or service in AWS with CF, you create a template; these templates quickly provision the services or applications (called stacks) needed. The most important top-level properties of a CloudFormation template are:

Resources: this is where we define the services used in the stack. For example, we could define an EC2 instance, its type, security group, etc.
    EC2Instance:
      Type: AWS::EC2::Instance
      Properties:
        InstanceType:
          Ref: InstanceType
        SecurityGroups:
          - Ref: InstanceSecurityGroup

Parameters: if we define an instance with its type, this is where that “parameter type” would be passed in:

    Parameters:
      InstanceType:
        Description: WebServer EC2 instance type
        Type: String
        Default: t2.small

Configuration files for CF are written in either YAML or JSON.

Converting CF to TF

In this document, I’ll take you through the steps I went through to convert CF to TF in a recent project I worked on. In case you haven’t heard of it, CIS is the Center for Internet Security, and they provide cyber security standards and best practices. Recently, AWS launched a new service called AWS Security Hub, which analyses security findings from various supported AWS and third-party products. Security Hub supports the CIS AWS Foundations Benchmark (read more here) which, quoting CIS, is “an objective, consensus-driven security guideline for the AWS Cloud Providers”. To jump straight into it, AWS Security Architects partnered with Accenture and created a CIS Foundations Quickstart written in CloudFormation. After looking around, I realised there weren’t any versions written in Terraform, and also no guides on how to translate it, or automated translation tools for that matter (future work? hit me up for a collab). I decided to do it manually, as I felt this was a bit too sensitive a project to be testing automated tools on. But fear not, I did not do it as manually as you think. Simplicity above everything!

Part 1: understand the structure, state the stack

Let’s take a look at how the CloudFormation CIS Benchmark Quickstart works. The stack can be described as follows:

- CloudTrail
- AWS Config
- S3

The templates are the following:

Pre-requisites template: makes sure CloudTrail, Config and S3 are created or exist and meet the preconditions for CIS Benchmarking: Config must have an active recorder running.
CloudTrail must be delivering logs to CloudWatch Logs.

Config setup template: sets the configuration needed for AWS Config.

CloudTrail setup template: sets the configuration needed for CloudTrail.

CIS benchmark template: this is the tricky one; it contains all 42 objectives the account should meet to be CIS Foundations compliant.

Main template: this is the main template, and it nests the stacks created from the previous templates so it can deploy the CIS AWS Foundations Benchmark.

Part 2: design TF

Now that we have stated how this CF project works, let’s see how we can transform it into the likes of Terraform:

- The templates can be transformed into modules.
- The pre-requisites can be part of the config and CircleCI checks (we will take a look at that at the end).
- The main template becomes main.tf, which contains all the callable modules.

Let’s see what a CF template looks like:

    AWSTemplateFormatVersion: 2010-09-09
    Description: (stuff)
    Metadata:
      Labels: (stuff)
    Parameters: (stuff)
    Conditions: (more stuff)
    Resources: (This is where all the cheesy stuff happens)

Now, let’s see how we can use that to “translate” into Terraform.

Part 3: translation

Now, apart from being tedious, translating line by line, especially in a big project, is a bit of science fiction (for me). So I dug around:

1) Terraform accepts CF stack templates. With the aws_cloudformation_stack resource, you can manage a CloudFormation stack, so this functionality allows you to deploy CloudFormation templates. It only accepts JSON templates.

Possible challenge: templates built in YAML instead of JSON. No problem! I had this myself; after a bit of googling, there is actually a tool called cfn-flip explicitly for converting CF templates between YAML and JSON. For example, if you want to create the template in JSON:

    $ cfn-flip main.template > main.json

or just copy the output:

    $ cfn-flip main.template | pbcopy

2) But what if it’s a giant template? This was my case too; the CIS benchmark template is quite big.
Luckily for us again, you can reference the JSON template by uploading it to an S3 bucket. It would look like this:

    resource "aws_cloudformation_stack" "cis-benchmark" {
      name         = "cis-benchmark-stack"
      template_url = "https://cis-compliance-json.s3-eu-west-1.amazonaws.com/cis-benchmark.json"
    }

Note: never reference config files and templates that have hardcoded variables (and never hardcode sensitive data) if they are hosted publicly. In my case, think of that template as skeletal; it doesn’t contain any sort of compromising info. And done, we just created part of the module in just 3 lines of code!

Challenges

3) Nested stacks. In our case, the Quickstart uses nested stacks. The aws_cloudformation_stack Terraform resource doesn’t have a “nested stacks” option, but creating the resources in the same module works fine.

Parameters: if you need to pass parameters, you can do it as you normally would; state the vars in the resource where you create the stack and you should be good to go. Example code (an extract of the module):

    # root module
    resource "aws_cloudformation_stack" "pre-requisites" {
      name         = "CIS-Compliance-Benchmark-PreRequisitesForCISBenchmark"
      template_url = "https://{bucket-name}.s3-(region-here).amazonaws.com/cis-pre-requisites.json"

      parameters = {
        QSS3BucketName      = "${var.QSS3BucketName}"
        QSS3KeyPrefix       = "${var.QSS3KeyPrefix}"
        ConfigureConfig     = "${var.ConfigureConfig}"
        ConfigureCloudtrail = "${var.ConfigureCloudtrail}"
      }
    }

    resource "aws_cloudformation_stack" "cloudtrail-setup" {
      name         = "CIS-Compliance-Benchmark-cloudtrail-stack"
      template_url = "https://{bucket-name-here}.s3-(region-here).amazonaws.com/cis-cloudtrail-setup.json"
      capabilities = ["CAPABILITY_IAM"]
    }

    [...]

4) Done! Now you can simply run and manage your stacks using Terraform. I suggest always being careful with sensitive data and parameters, and following best practices. You can read more about it here.

Conclusion

When it comes to features, CF and TF are not equivalent.
Not everything CF is able to deploy can be expressed directly in TF, which is why I aimed for this solution; translating line by line would be very tedious, so if that is your case I’d suggest rewriting the entire module in TF. Writing a translator would be complex but very useful; it would still have to handle the cases where CF uses intrinsic functions (please contact me for ideas!), but I’d guess that’s for future work. I hope this quick workaround helped you out!
Dec 10, 2019
Increasing TestFlight Adoption With the App Store Connect API
At Deliveroo, we rely on TestFlight to ensure our iOS app ships to the App Store with as few issues as possible. We don’t have manual testers, and since we operate in 13 countries and support a number of country-specific features and payment options, it is important to get as many people as possible to install the latest beta to cover most use cases. To do this we prompt employees in app with a modal screen inviting them to install the latest beta. The prompt is shown when the app detects it is out of date by comparing the app version with the latest version available on TestFlight.

Until recently, configuring the latest TestFlight version required a manual step during the release process: the developer in charge of releasing the app that week needed to update a configuration tool to set the latest available TestFlight version. We’ve now automated that step using the App Store Connect API.

Fetching the TestFlight Version From the App Store Connect API

The App Store Connect API is a REST API that can be used to access various areas of App Store Connect, such as user management, TestFlight, etc. This API is still relatively new: it was announced at WWDC 2018 and released in November 2018. One use case that was particularly interesting to us was fetching the build number of the latest TestFlight public beta from App Store Connect. We came up with the following query to fetch the info we needed:

    $ curl -g "https://api.appstoreconnect.apple.com/v1/builds?limit=10&sort=-version&filter[app]=<apple_app_id>&include=buildBetaDetail" --header "Authorization: Bearer <generated JWT token>"

In the query above, replace <apple_app_id> with your own app ID. The JWT token can be generated with the script below, taken from the WWDC video presentation about the App Store Connect API. The curl request above returns some JSON that contains, amongst other things, the build numbers of the latest ten betas.
It is then simple enough to look for the highest version number that is in the following state: "externalBuildState": "IN_BETA_TESTING". You’ll have to do a bit of processing: the response follows the JSON:API format and contains a data object, which holds the version numbers, and an included object, which holds the build states. You can map a version number to a build state by joining on the opaque IDs (see sample code below).

Authenticating with the App Store Connect API

Authentication is done with a JSON Web Token, which can be created with a private key generated on App Store Connect in the Users and Access section. The key only needs Developer access, not Admin. As you must not embed a private key within an app, we built a service to query the API. At the last company hack day we set out to build an AWS lambda (to avoid adding more code to our backend service) to fetch the version information. The lambda takes around 2 seconds to run, most of which is spent waiting for a response from the App Store Connect API. This is a long delay, and means the configuration endpoint can’t query the lambda directly, even occasionally, or it would increase latency for some requests. A simple solution has been to have a background worker query the lambda periodically and cache the result in Redis. Here is an overview of how the final system is set up:

Try It Yourself

1. Go to the App Store Connect Users and Access section.
2. Create an API key, and make a note of the key ID and issuer ID. Download the key and save it securely; you will not be able to download it again.
3. Copy the script below into a file named, for example, testflight.rb, and place it in the same folder as your private key.
4. Set the values for ISSUER_ID, KEY_ID and APP_ID.
5. Run the script: $ ruby testflight.rb.
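The worker-plus-cache arrangement can be sketched as follows. This is an illustrative shape rather than our actual code: a plain Hash stands in for Redis, and the fetcher callable stands in for the lambda invocation.

```ruby
# Illustrative sketch: a background worker refreshes a cache periodically, so
# the configuration endpoint reads the cached value and never waits on the
# slow App Store Connect round trip.
CACHE = {}  # stand-in for Redis
CACHE_KEY = "latest_testflight_version"

# Called periodically by the background worker; fetcher wraps the lambda call.
def refresh_version_cache(fetcher)
  CACHE[CACHE_KEY] = fetcher.call
end

# Called on the hot path by the configuration endpoint: a cache read only.
def latest_testflight_version
  CACHE[CACHE_KEY]
end
```

Because the hot path never calls the lambda, a slow or failed refresh only means serving a slightly stale version number.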
This should print something like: Latest TestFlight beta version: 21566

    require 'net/https'
    require 'uri'
    require 'json'
    require 'base64'
    require 'jwt'

    ISSUER_ID = "<replace-with-issuer-id>"
    KEY_ID = "<replace-with-key-id>"
    APP_ID = "<replace-with-app-id>"

    def generate_token
      private_key = OpenSSL::PKey.read(File.read("AuthKey_#{KEY_ID}.p8"))
      JWT.encode(
        {
          iss: ISSUER_ID,
          exp: Time.now.to_i + 20 * 60,
          aud: "appstoreconnect-v1"
        },
        private_key,
        "ES256",
        { kid: KEY_ID }
      )
    end

    def fetch_latest_version
      uri = URI.parse("https://api.appstoreconnect.apple.com/v1/builds?limit=10&sort=-version&filter[app]=#{APP_ID}&include=buildBetaDetail")
      header = { "Authorization": "Bearer #{generate_token}" }
      response = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
        request = Net::HTTP::Get.new(uri.request_uri, header)
        http.request(request)
      end

      parsed = JSON.parse(response.body)
      # Map build IDs to versions, and build IDs to external build states,
      # then join on the IDs to find the versions currently in beta testing.
      build_id_map = parsed["data"].map { |build_info| [build_info["id"], build_info["attributes"]["version"]] }.to_h
      beta_details_map = parsed["included"].map { |beta_details| [beta_details["id"], beta_details["attributes"]["externalBuildState"]] }.to_h
      versions_in_beta_test = build_id_map.select { |id, _version| beta_details_map[id] == "IN_BETA_TESTING" }.map { |_id, version| version }
      versions_in_beta_test.sort.last
    end

    puts "Latest TestFlight beta version: #{fetch_latest_version}"

Conclusion

With the App Store Connect API we’ve been able to facilitate TestFlight onboarding for all employees. The number of food delivery orders placed from a TestFlight build has more than doubled, and our confidence in releasing a new version has increased. From my perspective this has been an interesting project which introduced me to AWS lambdas and our Ruby on Rails stack. We’re always hiring, and if you’re interested in both iOS and backend development, you should consider joining our team.
Oct 15, 2019
Meet the Grads
Interested in joining Deliveroo? Head to the bottom of this post for details on our 2020 Grad Hiring. Why Deliveroo? Hi, I’m Ryan, I recently graduated after studying Computer Science at the University of Plymouth and I’ve joined the Consumer Tech team as a backend engineer. During my time as a student I was very familiar with Deliveroo (mostly on Saturday mornings!) and this meant Deliveroo felt like the perfect place for me after Uni. One of the benefits of joining a company like Deliveroo is that despite the size and scale of the company, there’s still loads of challenging problems to solve. This really drew me to Deliveroo as it meant that there was ample opportunity for me to challenge myself by tackling new and interesting problems and I knew that I wouldn’t be doing boring or repetitive tasks. Another big motivator for me to join Deliveroo was the company culture. It was clear from the start that Deliveroo really valued its employees, and this showed when speaking with engineers and listening to how highly they spoke of Deliveroo. Application and Interview Process Hi, I’m Matt and I studied Computer Science at University College London and have joined Deliveroo as a Software Engineer in the Data Services team. In the team we build and operate software that can support the data intensive applications and scenarios that we have at Deliveroo. This includes using technologies such as Kafka and Kubernetes and partnering across the company to collaborate and consult on data-intensive applications. Having known a few people already working at Deliveroo I was really keen to apply for a graduate role and so kept an eye out for any openings on the Deliveroo careers site. As soon as I saw an advertisement for the 2019 Graduate role I was quick to send off my application, looking forward to any next steps. 
I soon received an email from one of the recruiters inviting me for a phone screen, which was just a short conversation in which I could highlight any areas of interest and which allowed the recruiter to get to know me a little better. A few days later I was invited for the first round of interviews! This was an online coding exercise with two other software engineers who work at Deliveroo. It was really enjoyable - both the engineers were super friendly and it allowed me to get a better insight into the work that goes on here. A few days later I received an email from one of the recruiters who told me that I had passed, and I was invited to complete a take-home task which consisted of solving a problem using a programming language of my choice. Once I had completed the task I sent it off to be reviewed by a few of the engineers, who then invited me for an onsite interview at the London HQ. The onsite was scheduled to last a few hours and consisted of two interviews - one technical and one behavioural. It also gave me an opportunity to get a tour of the office - and of course the famous Roo Pitch. The technical interview consisted of going over the take-home task and working with two other engineers to make a few changes. It felt very relaxed - my interviewers were really helpful, giving me advice when needed. The behavioural interview was with one of the Engineering Managers and was an opportunity to talk over my experiences working as part of a team and how I could be a good fit for the company. All in all the day was really enjoyable and allowed me to get a good insight into what life at Deliveroo would be like. About a week later I received a phone call from the recruiter, who informed me that I had passed the final round and they would be extending an offer to me! I was super excited to get the news, knew straight away that I wanted to accept the offer, and was looking forward to starting a few weeks later.
Onboarding Hi, I’m Aleena, I studied Computer Science at Bristol University and have joined as a frontend engineer on the Consumer Tech team, working on the web and mobile web experience. As someone with previous experience in startups, I was keen to join somewhere with the culture and pace of a startup, but where I could also work with more frontend developers on a wider range of problems, so Deliveroo fit the bill for me! Most of the tech grads joined in the same week, and so we had a week of grad-only onboarding before doing the usual tech onboarding with other new joiners. I found that this was a great opportunity to connect with the other grads, and to have some extra sessions focussed on general skills in working as an engineer. We were all assigned a “team buddy” - someone on our team who helped to introduce us to the team, set up our development environment and answer any technical or general questions. For me, this meant having someone who could pair with me in the first few weeks, and direct me towards tickets that would help me get familiar with different aspects of the architecture. I also paired with several other frontend engineers in my team, which helped me get an idea of the different features that my team were working on, and understand the thought processes of the engineers around me. As new grads, we were also assigned a “senior mentor” - someone on a different team from ours who we could meet with periodically to discuss technical and non-technical things. Having a person outside my immediate team meant it was easier to discuss general career goals, and my senior mentor set up a lot of meetings for me to meet people in teams and roles different from mine, which was a great way to understand the scope of the engineering team at Deliveroo.
As a frontend engineer, I also meet weekly with all the other frontend engineers in the company, where we have a chance to discuss what we’re all working on, talk about any general frontend issues, and collaborate on new parts of the design system with members of the design ops team. Day-to-day I’m Hashim, a Software Engineer on the Consumer Tech team. I joined Deliveroo after completing my Computer Science degree at King’s College London, having worked as a Software Engineering Consultant during my placement year. While consulting has its benefits, I wanted to work for a company with a mission that I could believe in and contribute towards; being a foodie myself, Deliveroo was the perfect place. From just my first week I had a pretty good idea of how Deliveroo operates, as well as its strategy to grow and expand. We even had a firm-wide meeting with the CEO, Will Shu. Seeing him explain the company plan and strategy really strengthened my belief that this is the right place for me. My team mentor has been super helpful since the day I started. She helped me to set up my laptop and get access to the services I needed, as well as introducing me to many people and giving me an overview of the office culture. After onboarding I was given an overview of the systems we’re responsible for and how they work. I split my time between self-guided training on the stack (Ruby & Go) and pairing on small tickets to get familiar with the codebase. What I really liked is that we’re able to take responsibility whenever we’re ready to, rather than after a certain period of time. I had my first bit of code in production in my first week! I was pleasantly surprised by the level of autonomy we get here at Deliveroo - everyone has a say and ideas are welcome no matter what your title is. I’ve heard of graduate schemes where the grads are given boring or repetitive tasks that no one else wants to do.
The great thing is that here it’s the opposite; my team told me to push back if this started to happen. 2020 Grad Hiring Interested in joining Deliveroo? Our 2020 grad hiring opens today! Click here for more information.

When was Deliveroo founded?
Deliveroo was founded in 2012.
Who are Deliveroo key executives?
Deliveroo's key executives are Will Shu, Rohan Pradhan and Vince Darley.
How many employees does Deliveroo have?
Deliveroo has 6,238 employees.
What is Deliveroo revenue?
Latest Deliveroo annual revenue is £277.14 m.
What is Deliveroo revenue per employee?
Latest Deliveroo revenue per employee is £102.49 k.
Who are Deliveroo competitors?
Competitors of Deliveroo include Uber Eats, GrubHub and Caviar.
Where is Deliveroo headquarters?
Deliveroo headquarters is located at 1 Cousin Lane, London.
Where are Deliveroo offices?
Deliveroo has offices in London, Los Angeles, Balaclava, Brussels and 9 other locations.
How many offices does Deliveroo have?
Deliveroo has 15 offices.
