Here we’ve provided content to help you. Hopefully, you’ll find what you’re looking for.
If not, please contact us.
What files does the service accept?
The service accepts Microsoft Access Databases (.mdb and .accdb formats) up to 15MB in size that are not password-protected.
What gets migrated into the Cloud?
Only the Database tables are migrated; no forms, macros or queries. The idea is to help you better manage your data.
What’s the point if you only migrate data?
The approach is to get your data into the Cloud, where you can integrate it with other data sources. Many Microsoft Access Databases work in isolation from the rest of the business, with no idea what’s going on around them. To better understand the health of your organisation, it’s best to have a holistic view, particularly if the Database supports key Business functions.
You can still use Microsoft Access for the Business logic (i.e. forms, queries and macros) and point it at the migrated tables for the data. This way you can phase your Cloud migration strategy and also avoid that pesky 2GB Microsoft Access size limit.
What happens when I upload a Database?
The following events occur:
the file is queued for processing,
the data (and only the data) is extracted, and
database tables are created according to the naming standards (see the next question for details).
How are the tables named?
If a Database is named Customer_Master.accdb and has the following tables:
the following tables will be created:
Note that a separate Database isn’t created; instead, the table names are prefixed with the name of the Database, and spaces are replaced with ‘_’.
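To make the naming rule concrete, here’s a small Python sketch of the apparent convention (lowercase everything, turn the extension dot and spaces into ‘_’, and prefix the table name with the Database file name). The helper name is ours, not part of the service:

```python
def migrated_table_name(db_filename: str, table_name: str) -> str:
    # Apparent rule: lowercase, turn the extension dot and spaces
    # into '_', then prefix the table with the Database file name.
    prefix = db_filename.lower().replace(".", "_").replace(" ", "_")
    table = table_name.lower().replace(" ", "_")
    return f"{prefix}_{table}"

print(migrated_table_name("Sales_Master_2005.mdb", "Sales Transactions"))
# sales_master_2005_mdb_sales_transactions
```

This reproduces the names shown in the conflict example further down (e.g. sales_master_2005_mdb_sales_transactions).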
As it’s a Public Beta, what should I be aware of?
Firstly, it’s free while we fine-tune the service and gather Public feedback :) The data is stored in an AWS Database that is Public, so please don’t send any sensitive data! Also, the Database is recreated daily at 12am GMT, so any previous data will be wiped.
I’ve submitted a file. Now what?
Your data will be automatically loaded into an Amazon AWS Database. You can look at the Database tables directly (user: email@example.com pass: datazumii). You’ll also get 2 emails along the way:
Acknowledgement of your data and
Results of the data load process.
How many records does the service load?
There is no fixed record limit; the limit is on the size of the Database, which is currently 15MB.
What happens when…
someone else has loaded a Database with the same name?
If someone else has loaded a Database with the same name, any tables not previously loaded will still be processed. Say the following events occur in sequence:
Piotr loads Sales_Master_2005.mdb with tables Sales_Transactions, Sales_Person and Products
Mary loads Sales_Master_2005.mdb with tables SalesTxns, SalesPersonnel and Items
Linus loads Sales_Master_2005.mdb with tables Transactions, Employees and ProductMaster
the following tables will be created from the above processing:
sales_master_2005_mdb_sales_transactions, sales_master_2005_mdb_sales_person, sales_master_2005_mdb_products
sales_master_2005_mdb_salestxns, sales_master_2005_mdb_salespersonnel, sales_master_2005_mdb_items
sales_master_2005_mdb_transactions, sales_master_2005_mdb_employees, sales_master_2005_mdb_productmaster
As long as no one else has the same Database Name/Table Name combination, your table will be loaded. If there is a conflict, either rename your Database or table (or both).
It’s a good idea to check the Database (user: firstname.lastname@example.org pass: datazumii) to see if the table already exists.
someone else has loaded a table with the same name?
If, in the previous example, Mary also had a table named Products and it:
had the same structure, the data will be appended to sales_master_2005_mdb_products
had a different structure, you’ll get an email saying something went wrong when loading the Database.
I reload the same Database table?
The data will be appended as long as it has the same structure and no one else has the same Database Name/Table Name combination. Note that you can check the Load_DT field to see when the data was loaded.
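To inspect when rows were loaded, you can query the Load_DT field directly. A minimal sketch, using Python’s sqlite3 as a stand-in for the actual AWS Database (the table name follows the examples in this FAQ; the data and connection are illustrative only):

```python
import sqlite3

# Stand-in, in-memory database; against the real service you'd connect
# to the AWS Database with the published credentials instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales_master_2005_mdb_products (name TEXT, Load_DT TEXT)"
)
conn.execute(
    "INSERT INTO sales_master_2005_mdb_products VALUES ('Widget', '2020-01-01 00:00:00')"
)

# Most recently loaded rows first, via the Load_DT audit column.
rows = conn.execute(
    "SELECT name, Load_DT FROM sales_master_2005_mdb_products "
    "ORDER BY Load_DT DESC"
).fetchall()
print(rows[0])
```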
I try and load an empty table?
At the moment, we don’t process empty tables.
I’ve logged into the Database using Trevor.IO but don’t see my data. What should I do?
Click the Refresh tables metadata option from the Cog icon at the top right of the page. Also, ensure you received the second email letting you know of the data load results. If you’re still stuck, please contact us.
Also note that empty tables won’t be loaded.
What happens with tables or fields with special characters?
These are replaced with ‘_’. Examples:
Table Name: Zip/Post Code becomes zip_post_code
Field Name: Sales Person Name becomes sales_person_name
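The sanitisation can be sketched in Python as below. Note this is our reading of the rule: we assume a run of consecutive special characters collapses to a single ‘_’, and the function name is ours:

```python
import re

def sanitise(name: str) -> str:
    # Assumed behaviour: lowercase, and replace any run of
    # non-alphanumeric characters with a single '_'.
    return re.sub(r"[^0-9a-zA-Z]+", "_", name).strip("_").lower()

print(sanitise("Zip/Post Code"))      # zip_post_code
print(sanitise("Sales Person Name"))  # sales_person_name
```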
What happens with tables or fields with special names?
Databases have special (reserved) words that can sometimes get in the way. If any of your table or field names match one from the (long) list below, the string ‘_data’ will be appended. So:
table name: if results in a table named if_data
field name: then results in a field named then_data
table name: else results in a table named else_data
Please check the below list for any special words:
accessible,action,add,all,alter,analyze,and,as,asc,asensitive,before,between,bigint,binary,bit,blob,body,both,by,call,cascade,case,change,char,character,check,collate,column,condition,constraint,continue,convert,create,cross,current_date,current_time,current_timestamp,current_user,cursor,database,databases,date,day_hour,day_microsecond,day_minute,day_second,dec,decimal,declare,default,delayed,delete,desc,describe,deterministic,distinct,distinctrow,div,double,drop,dual,each,else,elseif,elsif,enclosed,enum,escaped,except,exists,exit,explain,false,fetch,float,float4,float8,for,force,foreign,from,fulltext,general,goto,grant,group,having,high_priority,history,hour_microsecond,hour_minute,hour_second,if,ignore,ignore_server_ids,in,index,infile,inner,inout,insensitive,insert,int,int1,int2,int3,int4,int8,integer,intersect,interval,into,is,iterate,join,key,keys,kill,leading,leave,left,like,limit,linear,lines,load,localtime,localtimestamp,lock,long,longblob,longtext,loop,low_priority,master_heartbeat_period,master_ssl_verify_server_cert,match,maxvalue,mediumblob,mediumint,mediumtext,middleint,minute_microsecond,minute_second,mod,modifies,natural,no,not,no_write_to_binlog,null,numeric,on,optimize,option,optionally,or,order,out,outer,outfile,over,package,partition,period,precision,primary,procedure,purge,raise,range,read,reads,read_write,real,recursive,references,regexp,release,rename,repeat,replace,require,resignal,restrict,return,returning,revoke,right,rlike,rows,rowtype,schema,schemas,second_microsecond,select,sensitive,separator,set,show,signal,slow,smallint,spatial,specific,sql,sql_big_result,sql_calc_found_rows,sqlexception,sql_small_result,sqlstate,sqlwarning,ssl,starting,straight_join,system,system_time,table,terminated,text,then,time,timestamp,tinyblob,tinyint,tinytext,to,trailing,trigger,true,undo,union,unique,unlock,unsigned,update,usage,use,using,utc_date,utc_time,utc_timestamp,values,varbinary,varchar,varcharacter,varying,versioning,when,where,while,window,with,without,write,xor,year_month,zerofill
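In Python, the suffixing rule looks like this (a sketch, using only a small subset of the full list above; the helper name is ours):

```python
# Small illustrative subset of the reserved-word list above.
RESERVED = {"if", "then", "else", "select", "table", "from"}

def safe_name(name: str) -> str:
    # Append '_data' when a name collides with a reserved word.
    return name + "_data" if name.lower() in RESERVED else name

print(safe_name("if"))      # if_data
print(safe_name("orders"))  # orders
```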
What’s this hash_rec column I see?
This is an internal column created to give each row a unique identity, which helps us spot new data. It’s part of our Datawarehouse design and is used to speed up the loading of data. It’s similar to a Car Registration: it’s unique to the car. Without the hash_rec column, we’d need to reload the entire file each time (ouch).
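The FAQ doesn’t say exactly how hash_rec is computed, but a common approach is to hash each row’s field values so that only rows with unseen hashes need loading. A hypothetical sketch (the algorithm, delimiter and function name are our assumptions, not the service’s actual implementation):

```python
import hashlib

def hash_rec(row: dict) -> str:
    # Hypothetical: identify a row by hashing its concatenated
    # field values; the real service may differ.
    payload = "|".join(str(v) for v in row.values())
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Rows already loaded, keyed by their hash.
existing = {hash_rec({"id": 1, "name": "Ada"})}

# On reload, only rows whose hash is unseen get appended.
incoming = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
new_rows = [r for r in incoming if hash_rec(r) not in existing]
print(len(new_rows))  # 1 — only the unseen row would be appended
```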