NewsomSolutions Contributor

Comments

  • Wow. Good to know. Out of curiosity have you reached out to support about this? I would think that they'd have a way to point those hidden jobs to another User ID on the back end so they aren't just orphaned and hidden.
  • Nice write up @SDIEvan , for this instance, was it the Domo instance user that got deleted, not the Windows user on the box that saved the Workbench jobs? When I read "we lost all the workbench jobs" I take that as you'd have to restore old versions. But if it was just that the Domo instance user was removed, thus the…
  • I think for both of those you'd need to do it in an ETL process. For the concat, the Beast Mode to concat values, you'd need to specify them out, and there isn't functionality within a BM to go back and handle rows 1 by 1 and add them in there (that I can think of). For the distinct sum, you'd have to be able to…
  • @adrianiy why don't you just create an additional dataset at the end of your DataFlow called like 'DF Finished Status' or something...and create a card pointing to it...and then maybe have a field 'Finished' that can equal 1 or 2...and a date...then do an alert on the card to look for when finished = 1 and date <…
  • Buzz is integrated into Domo as part of the application, so you wouldn't be able to swap them out that I'm aware of. My org is a heavy Slack company and I've thought about that...replace Buzz with Slack. The way I sort of did it is to use Zapier email and then connect that to Domo for alerts...so the alerts are logged…
  • Don't feel silly, no reason for that, you're working on a tough BM w/ case statements...nothing to be silly about. I once opened a support ticket b/c I was moving too fast and forgot to put "select" in a MySQL query...when you do that, you can then feel silly.
  • I've done something similar using the Single Bar chart option, then limiting the Maximum Bars in the General settings and sorting by whatever value you're using for "top". But that wouldn't help your 'Grand Total'. What you may have to do is do an ETL for this and do the top 20 grouping separately and do the aggregation…
  • If you have your Netsuite dataset, set to APPEND, and you have C in a previous run and then in the next run someone deletes C, the next run will show C as blank like you describe. If you have the connector set to Replace, then you wouldn't see C at all because the schema will change to the new format. What I've always…
  • I'll try to look at this one a bit more closely later today, but as I quickly look at it...you have 3 CASE statements with only 1 END. They have to be 1:1. Also, again I don't know your data, but it may be helpful to take a step back in the process if you can and do something in ETL to help join the data so your…
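To illustrate the 1:1 CASE/END pairing mentioned above, here is a minimal sketch using SQLite in Python; the table and column names are hypothetical stand-ins for the poster's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "closed"), (3, "open")])

# Three CASE expressions, so three ENDs -- every CASE closes with its own END.
rows = conn.execute("""
    SELECT id,
           CASE WHEN status = 'open'   THEN 1 ELSE 0 END AS is_open,
           CASE WHEN status = 'closed' THEN 1 ELSE 0 END AS is_closed,
           CASE WHEN status IS NULL    THEN 1 ELSE 0 END AS is_missing
    FROM orders
""").fetchall()
print(rows)  # -> [(1, 1, 0, 0), (2, 0, 1, 0), (3, 1, 0, 0)]
```

Dropping any one of the ENDs produces a syntax error, which is the kind of failure the comment is pointing at.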
  • I see what you're referring to now. Sorry for the confusion. You could run the 15 jobs once during a specific time window and check whether they reset at the same time, then remove the window. But if the problem is reports displaying without complete data, you can create an ETL on the Domo side (Domo, not Workbench) and run the ETL on a schedule instead of after something updates. Hope that helps.
  • I think there are limits on how much data you can bring down from Salesforce in one shot. You may have to limit it by month and then append to build your year.
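The month-by-month approach can be sketched like this; `fetch_month` is a hypothetical stand-in for the actual Salesforce query (not a real Domo or Salesforce API call), and a real job would filter on something like CreatedDate:

```python
def fetch_month(year, month):
    # Hypothetical stand-in for a Salesforce pull limited to one month.
    # In practice this would be a SOQL query with a date-range filter.
    return [{"year": year, "month": month}]  # placeholder rows

def build_year(year):
    rows = []
    for month in range(1, 13):
        rows.extend(fetch_month(year, month))  # append each month's chunk
    return rows

full_year = build_year(2019)
print(len(full_year))  # 12 monthly chunks appended into one yearly dataset
```

Keeping each pull small avoids the per-request row limits, and the appends rebuild the full year on the Domo side.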
  • Pardon me if this translation is off; I'm using Google Translate from English. But you can go into Settings, and there is a place to limit concurrent jobs, so I think if you leave that unchecked you should be able to run them at the same time. In Workbench 4, I believe you could schedule multiple jobs from one schedule, but in Workbench 5 the schedule lives with the job. I only did it in Workbench 4 anyway, so I may be wrong about that. Hope this helps. Matt
  • A participant by default can export card data out to Excel/etc. But by default, they wouldn't have rights on the dataset itself. If they HAVE to export data...what you should do is create a custom role. Go into Admin - Roles - in the top right...click New - then select Participant and then add in the Data rights for Edit…
  • Few questions: 1. You're wanting unique counts of those contact IDs? 2. Why does it look like you have 3 case statements in this, why not do everything in one? I'd write this up something like this, but because of your business rules or whatever, this may not work...but may help spur a better idea. (COUNT(DISTINCT( CASE…
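The one-statement COUNT(DISTINCT CASE…) pattern suggested above can be demonstrated with SQLite; the table and column names here are illustrative, not the poster's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE touches (contact_id INTEGER, channel TEXT)")
conn.executemany("INSERT INTO touches VALUES (?, ?)",
                 [(1, "email"), (1, "email"), (2, "email"), (3, "phone")])

# One statement does it all: the CASE picks qualifying rows (others become
# NULL and are ignored), DISTINCT dedupes the IDs, COUNT tallies them.
(unique_email,) = conn.execute("""
    SELECT COUNT(DISTINCT CASE WHEN channel = 'email'
                               THEN contact_id END)
    FROM touches
""").fetchone()
print(unique_email)  # -> 2 (contacts 1 and 2)
```

Folding the condition into a single CASE inside the aggregate is usually clearer than stacking three separate case statements.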
  • In a table format, I've created a WoW use case similar to your example below. With, let's say, stores as the rows...and the weeks as the columns. So you'd have 'Atlanta - '3 Weeks Prior', '2 Weeks Prior', '1 Week Prior', 'Current Week'....it took some ETL work, but all you'd need to do is use a grouping function and some…
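A minimal sketch of that grouping step, using SQLite with hypothetical store/week columns: one row per store, with a CASE-built column per week bucket.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, weeks_ago INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("Atlanta", 3, 10.0), ("Atlanta", 2, 20.0),
    ("Atlanta", 1, 30.0), ("Atlanta", 0, 40.0),
])

# Pivot weeks into columns: group by store, with one SUM(CASE ...) per bucket.
rows = conn.execute("""
    SELECT store,
           SUM(CASE WHEN weeks_ago = 3 THEN amount ELSE 0 END) AS wk_3_prior,
           SUM(CASE WHEN weeks_ago = 2 THEN amount ELSE 0 END) AS wk_2_prior,
           SUM(CASE WHEN weeks_ago = 1 THEN amount ELSE 0 END) AS wk_1_prior,
           SUM(CASE WHEN weeks_ago = 0 THEN amount ELSE 0 END) AS current_wk
    FROM sales
    GROUP BY store
""").fetchall()
print(rows)  # -> [('Atlanta', 10.0, 20.0, 30.0, 40.0)]
```

The same pivot shape works in a Domo SQL dataflow, leaving the card to render a plain table.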
  • @AttuAk You can't create a duplicate dataset...you can create a duplicate ETL and thus a duplicate dataset from that. But the beast modes wouldn't go to the copied version. You can use 'save as' for a card on a dataset that has BM associated with it and they'd go to the newly created copied card. If you're trying to 'back…
  • You have to go into the default group itself and, in the Members list at the top, X that user out.
  • Are there publication groups involved in the card/page? Maybe the new owner of the card doesn't have access to the dataset, if publication groups are involved.
  • I know this is sort of an older case but want to put in that I ran into this error today (using wb 4) and ended up just re-creating the job and it cleared up. I think I had saved/edited so much something just got screwy on the back end.
  • Sure thing... Report = Decennial Census Geo Break Down = All US (you can use whatever I guess) Year = pick what you want Hope that helps.
  • There is a rural/urban housing one available from the US Census Bureau. You have to apply on the US Census site (there's a link through Domo) to get the API key. The data shows rows by county b/c zip codes are a US Postal Service designation, not a legal jurisdiction. I've run into the same problem. You can then pull down something…
  • As for the DF, yea, SQL is so much simpler and as a former DBA it was my go-to, but too many drags on mysql forced me to make the change. It sucks sometimes setting up and it will take some time to get used to it and how to think about it, but performance wise for me it was a much better move. As for the comparisons, if…
  • Great. 1. First off, MySQL is slower in performance than Magic ETL. To me so is RedShift, but officially I'm not sure that is the case. 2. Nice 3. Makes sense. If you are using a card that does time period comparisons, does that not work for you vs writing out your own dataflow/BM comparisons?
  • I'm going to follow this b/c I am bringing in data and some may come in at the billion row level. But a few questions: 1. When you have slow dataflows, are they using Magic ETL, MySQL, or RedShift? 2. Have you tried thinning out your datasets and possibly structuring your data differently for the cards? What I mean is that…
  • Did you look at the business in a box options? May not apply to your specific industry use case, but there are some handy sample examples in there for different departments/needs.
  • 99.9% confident it is the server. What may take long is other details in your visualization (page filters, the dataset refreshing and then rebuilding the visual, etc).
  • Answers: 1. No stored system variable - you must find it yourself using a date/id field and go from there....build a recursive dataflow to just filter out the dupes. 2. Maybe - but you have to build code to export source data to a flat file to then import in your stream, which sucks. Alternative: There is an option for data assembler for…
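The "filter out the dupes" step of that recursive approach can be sketched with SQLite: keep only the newest row per id, using the date field as the tiebreaker (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (id INTEGER, updated TEXT, value TEXT)")
conn.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    (1, "2019-01-01", "old"), (1, "2019-02-01", "new"),
    (2, "2019-01-15", "only"),
])

# Dedupe: join each row back to the max date per id, so only the
# latest version of each record survives.
rows = conn.execute("""
    SELECT h.id, h.value
    FROM history h
    JOIN (SELECT id, MAX(updated) AS latest
          FROM history GROUP BY id) m
      ON h.id = m.id AND h.updated = m.latest
    ORDER BY h.id
""").fetchall()
print(rows)  # -> [(1, 'new'), (2, 'only')]
```

In a recursive dataflow, the appended history plays the role of `history` here, and this filter is what emits the clean, current snapshot.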