Tim is an AIOps product that helps SREs and DevOps teams drastically reduce their cloud management overhead. It achieves this through a combination of real-time monitoring and alerting on cloud spend, reporting with drill-down on tags, services, and resources, a SQL-queryable cost database, and powerful anomaly detection.
You can register for Tim at https://www.taloflow.ai/register. Once your account is created, you’ll be prompted to go through the integration process.
If you are integrating at the Master or Root Account level, please pick any of the following options:
If you are integrating a Sub Account, then in addition to the above, please refer to the following docs so you can give us access to an S3 bucket containing a filtered AWS Cost and Usage Report:
Cost reports need to run before the platform is available
Once integrated, we first need to process a fresh set of your cost reports from AWS before handoff. AWS usually produces these every 6 to 24 hours. Depending on when you integrate, it may take until the next business day before your account is ready for you. Thanks for your patience!
Once a first set of cost reports has run, you will get the following:
- Email alerts on anomalies
- Link to access your monitoring dashboard
- Link to your custom reports
- Short survey to share your objectives with the tool
7 days of cost reports are needed before full functionality is available
If you didn't have AWS Cost and Usage Reports turned on before you started using Tim, there is insufficient data for us to offer reliable forecasts and anomaly detection. This will impact the dashboard UI and alerting. After approximately 7 days, however, enough cost history is available for these to flow normally, and they will improve over time. If you do have a longer history of reports, please make sure you copy those reports over to the S3 bucket you gave Tim access to for the best results.
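One way to copy historical report files is with the AWS SDK. The sketch below is a minimal example, not an official Tim procedure; the bucket names and prefixes are hypothetical placeholders you would replace with your own.

```python
# Sketch: copy historical Cost and Usage Report files into the bucket
# shared with Tim. All bucket names and prefixes are placeholders.
SOURCE_BUCKET = "my-old-cur-bucket"     # where your historical CUR files live
DEST_BUCKET = "bucket-shared-with-tim"  # the bucket you gave Tim access to


def dest_key(src_key: str, src_prefix: str, dest_prefix: str) -> str:
    """Map a source report key to its destination key, preserving the
    path relative to the source prefix."""
    relative = src_key[len(src_prefix):] if src_key.startswith(src_prefix) else src_key
    return dest_prefix + relative


def copy_reports(src_prefix: str = "cur/", dest_prefix: str = "cur/") -> None:
    """List every report under src_prefix and copy it across buckets."""
    import boto3  # AWS SDK; requires credentials with read/write S3 access
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SOURCE_BUCKET, Prefix=src_prefix):
        for obj in page.get("Contents", []):
            s3.copy(
                {"Bucket": SOURCE_BUCKET, "Key": obj["Key"]},
                DEST_BUCKET,
                dest_key(obj["Key"], src_prefix, dest_prefix),
            )
```

An `aws s3 sync` between the two buckets accomplishes the same thing if you prefer the CLI.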
As a user of Tim, you have access to a Grafana dashboard. To log in, go to https://tim.taloflow.ai. Drill into your data however you want to, by tags, service, and more. You can view your spend velocity at intervals from 1 hour up to weekly, and virtually any perspective on cost you need is available to you.
Hover over the (i) icon on graphs to see the tooltip info
There is some important contextual information on the graphs and tables in the Grafana UI that you don't want to miss. Most graphs and tables have a tooltip you can hover over for an easy-to-access explanation of what is being shown.
| Filter | Description |
| --- | --- |
| Accounts | Easily select which accounts (e.g., Sub Accounts within a Master/Root Account) you want to view or filter out. This is useful for orgs that want a consolidated view of all their accounts. |
| Services | Every service code in AWS that you use and that is available in your AWS Cost and Usage Report appears here, and you can filter accordingly. |
| Interval | You can view the time-series data in various time intervals, starting from hourly granularity. |
| Anomaly level | You can filter anomalies by severity, Level 3 being most serious and Level 1 being least serious. |
In the top-right controls, the important ones are:

- **Refresh** button. Depending on the query being run, you may have to click this to see an updated view of the data.
- **Time range** selector. Here you can select a past historical period from the drop-down, or set a **Custom time range**.

Pro Tip: The **Custom time range** selector is useful if you are looking for the forecast of a specific period in the future.
Time range selector
Numerous graphs and tables in the Grafana dashboard use Actuals and Forecasts. Actuals come from the AWS Cost and Usage Report; in other words, the charges incurred on AWS in the period. Forecast numbers are produced by Tim. They are based on your AWS Cost and Usage Report history, activity on your account, and machine learning models.
In the various graphs where a Forecast is presented, you can also see the confidence limits (or confidence bands), which show you the range of expected spend in the Forecast for any given time interval.
Forecast vs. Actuals (AmazonS3)
Together, the Actuals and Forecasts help Tim determine the degree of an anomaly. In other words, how far the Actuals deviate from the Forecast (i.e., the expected amount) determines whether, and at what severity, an anomaly is triggered.
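Tim's actual anomaly model is internal, but the general idea can be illustrated with a simplified sketch: grade an anomaly by how many confidence-band widths the actual spend falls outside the forecast. The thresholds below are hypothetical, chosen only to show the shape of such a rule.

```python
# Illustrative sketch only — Tim's real model and thresholds are internal.
def anomaly_level(actual: float, forecast: float, band_width: float) -> int:
    """Return 0 (no anomaly) through 3 (most serious), based on how many
    band-widths the actual deviates from the forecast. `band_width` is
    half the confidence band; thresholds are hypothetical."""
    if band_width <= 0:
        raise ValueError("band_width must be positive")
    deviation = abs(actual - forecast) / band_width
    if deviation <= 1.0:
        return 0  # within the confidence band: expected spend
    if deviation <= 2.0:
        return 1  # Level 1: least serious
    if deviation <= 3.0:
        return 2  # Level 2
    return 3      # Level 3: most serious
```

In this sketch, a spend of $125 against a $100 forecast with a $10 band half-width deviates by 2.5 band-widths and would grade as Level 2.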
These tickers are helpful for getting a quick glimpse of your actual spend and forecasts by the typical planning periods (e.g., quarter, month, week).
Please note that the time zone of the graphs and tables in the UI is based on your browser, with the exception of the By Period Tickers (described below), which are based on UTC.
We email you the following summaries and alerts.
- Cost-report summaries
- Daily summaries
- Weekly summaries
- Monthly summaries
- Service-based anomalies
Each email details the anomalies within the period and shows you a breakdown of when in the day the anomalies occurred, with hour-by-hour time stamps. The by-period emails (e.g., weekly summaries) detail which AWS Services were most anomalous for the period.
AmazonS3 Spend Anomaly Email
You can opt out of specific emails (e.g., cost-report summary emails) simply by replying "stop" to the email. This also works for emails that alert on specific AWS Services (e.g., AmazonSNS).
Working with AWS Cost and Usage Reports is a painful experience, but they contain a lot of useful data if you know how to get at it. Every time a new cost report comes in, Tim auto-generates a cleaned-up version with pivots ready for quick analysis.
The reports are shared as Google Sheets in a secure Google Drive folder. You can copy these reports to whatever store you want. The Google Drive folder is shared after on-boarding.
Every time a new cost report comes in, new spreadsheets are generated within the hour and show up in your folder, so you always have the most up-to-date information. To avoid clutter, the old ones are automatically moved to an archive folder within the main folder.
Below are some of the pivots that are available by default. If you require other pivots, please email us at firstname.lastname@example.org and we can turn them on for you.
- Pivot by Service and Service operations with Day/Day amounts
- Pivot by Service and Service operations with Month/Month amounts
- Pivot by Tag Grouping (grouping includes Service and Service operations)
- Pivot on EC2 Type
Too many pivots
Having too many pivots turned on at the same time in the auto-generated reports can slow down Google Sheets, depending on how much data you have.
Pivot by Service and Service operations with Day/Day amounts
Pivot by Service and Service operations with Month/Month amounts
Pivot on EC2 cost by Type
Viewing by tags
If you use tags for resources within your organization, all the same information in the pivots is available broken down by your tags as well.
The cost information that Tim has for you is fully queryable with SQL in AWS Athena. To get access to the database of cost data for your organization, a few extra steps are required. Please email our team at email@example.com and we will send you setup instructions, along with a list of Common Queries and Advanced Queries to get you started.
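To give a flavor of what such a query might look like, here is a hedged sketch of daily cost by service over the last 7 days, submitted via the AWS SDK. The database and table names are placeholders (your real names come from the setup instructions); the column names follow the standard AWS Cost and Usage Report schema in Athena.

```python
# Sketch of a "common query": daily cost by service for the last 7 days.
# cur_database.cur_table is a placeholder; substitute your own names.
QUERY = """
SELECT line_item_product_code AS service,
       date_trunc('day', line_item_usage_start_date) AS day,
       sum(line_item_unblended_cost) AS cost
FROM   cur_database.cur_table
WHERE  line_item_usage_start_date >= date_add('day', -7, current_date)
GROUP  BY 1, 2
ORDER  BY day, cost DESC
"""


def run_query(database: str, output_location: str) -> str:
    """Submit the query to Athena and return the query execution id."""
    import boto3  # AWS SDK; requires credentials with Athena access
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    return resp["QueryExecutionId"]
```

The output location is an S3 path where Athena writes the result CSV; you can also paste the SQL directly into the Athena console.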