SSRS Scheduled Jobs to clean up report tables

Is there a scheduled job that should be running to clean up report tables, or is that core functionality? I ask because our Report DB is over 10 GB and there are 168,000+ tables. That seems a bit high to me, or maybe that is normal.

Thanks,

Ken

Did you ever find a resolution to this? We are having the same problem.

In 10.0 there was an issue, but this shouldn’t be present in 10.1.400/10.1.500/10.1.600. Support has a SQL script we can send you that will clear up the SSRS/Report database in 10.0.

In 10.1.600.6+, we don’t persist the SSRS/Report DB data after the report is successfully rendered unless an archive period is set before the report is submitted.

Once the archive period has passed, the SSRS (GUID) tables should be removed. Is there a process that removes these tables, or are we on our own? It seems there should be an Epicor process to do this.

Which version are you on?

10.1.600.8

Those GUID tables shouldn’t persist beyond the rendering of the report unless there is an archive period set; we remove those tables immediately, as they aren’t actually needed to reprint/print preview from the System Monitor. NOTE: there is an SCR regarding MICR check GUID data persisting, and that will be addressed in a future point update.

By any chance, did you upgrade from an earlier version of Epicor and use the same SSRS database as that earlier version?

In SSMS, click on your SSRS report database > Tables, then select View > Object Explorer Details (or press F7). Sort by Create Date. Do you have tables that were created today or yesterday in your list?
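If Object Explorer is sluggish with this many tables, the same check can be done with a query against SQL Server’s catalog views. This is a generic sketch (not an Epicor-supplied script); run it in your SSRS/report database:

```sql
-- List report-cache tables by creation date, newest first.
SELECT name, create_date
FROM sys.tables
ORDER BY create_date DESC;
```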

I show tables created both today and yesterday. In fact, there are currently 19,621 tables created between 5/22/17 and today.
We migrated from [Uploading…] to 10.1.600.8 on 8/11/17. Are we missing a task to delete these records?

We had to manually drop the tables. There is a report table that stores the GUID. We had quite a few (10K+) that were orphaned, as they didn’t have a matching record in the database. Once I got the table count back down to a normal size, the Epicor process appeared to work.

We were on 10.0 and upgraded to 10.1.400. Support thought the earlier version was the reason.

Just checked the DB and it is at 500MB.

Hey,

We’re having the same issue on 10.1.600, where our SSRS cache is building up. Over 270,000 tables currently.

I had to use a SELECT COUNT query to get the table count, as SQL Server Management Studio crashes when I try to expand the list of tables.
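For reference, a count against sys.tables avoids enumerating the tables in the SSMS UI entirely. This is a generic sketch of such a query (not necessarily the poster’s exact one), run in the report database:

```sql
-- Count report-cache tables without expanding the list in Object Explorer.
SELECT COUNT(*) AS TableCount
FROM sys.tables;
```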

I opened a case with support (CS0000651783). In the meantime, does anyone have ideas on how to clear the old data without crashing SSMS?

Epicor support pointed to the newest patch but has yet to confirm it will actually clear the cached tables in the SSRS database. I’m going to ask our systems guys to put the patch on the test server anyway.

I did find this, which has an outline of a script, but use it at your own risk:

Nice get-out-of-jail card there, Randy…

In all seriousness, you can use SQL for lots of fun stuff; it is very, very powerful. But with power comes responsibility, like the responsibility to have good backups made beforehand. :slight_smile:

I believe the latest version of SSMS is 17.3, but if you need to do T-SQL querying, you can also use PowerShell or OSQL if you have to.

We’re on 10.1.400.9 and I have tables from the end of March. These should be cleaned up automatically one would think.

@EpicorAnon 10.1.400 you say? Epicor support mentioned to me that the later patches remove the files after the report is generated. A newer patch is probably the best way to go.

We have pretty high volume; we’ve been adding about 12-13K tables to the E10_Reports DB daily. They are piling up pretty quickly, and we were told it’s at least partly the cause of some of our performance issues. It doesn’t hurt to try other avenues on a TEST server.

We’re exploring a newer patch ourselves but may need to “limp” along until then as that process takes time to test.

Just to be clear, did Epicor confirm the problem exists in 10.1.400.16? Also, is there a retention period setting that one can change?

Thanks,

Looking at my notes we had two issues.

Issue one: a file was locked on the server. I removed the file and this error went away.

LiveTaskAgent: The following error occurred while purging reports:

Error Detail:
Access to the path '\\E10\EpicorData\Reports\user\GL Financial Report78188.xml' is denied.
Stack Trace: at Epicor.ServiceModel.Channels.ImplBase`1.ShouldRethrowNonRetryableException(Exception ex, DataSet[] dataSets) in c:\_Releases\ICE\3.1.400.18\source\Shared\Framework\Epicor.ServiceModel\Channels\ImplBase.cs:line 709
 at Ice.Proxy.BO.SysAgentImpl.PurgeReports(Nullable`1 PurgeDate) in c:\_Releases\ICE\3.1.400.18\source\Shared\Contracts\BO\SysAgent\SysAgentImpl.cs:line 841
 at Ice.TaskAgentCore.TaskAgent.<>c__DisplayClass21.<RunAgentWorker>b__a(SysAgentImpl impl)
 at Ice.TaskAgentCore.ImplCaller.ImplCaller`1.<>c__DisplayClass1.<Call>b__0(TImpl impl)
 at Ice.TaskAgentCore.ImplCaller.ImplCaller`1.Call[TResult](Func`2 doWork, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior)
 at Ice.TaskAgentCore.ImplCaller.ImplCaller`1.Call(Action`1 doWork, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior)
 at Ice.TaskAgentCore.TaskAgent.<RunAgentWorker>b__9(DateTime purgeTime)
 at Ice.TaskAgentCore.ReportPurger.Purge(DateTime purgeTime)

If you are familiar with SQL Server this will be straight forward. If not, please don’t do this.

There is a table in your Epicor database called Ice.SysRptLst.

Take a look at the row count for that table. Ours was over 10,000. This is the number of reports still being tracked by Epicor. Once the Report Purge process ran successfully, our count went down to 320.

Run the following query:
select * from Ice.SysRptLst where Archived = 0

If you have reports on the list whose PurgeDate has already passed, you might have the first issue. I would copy all of the files in \\E10\EpicorData\Reports\user to another location, to rule out file permissions as part of the issue.
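To narrow the list to just the reports that should already have been purged, a filter along these lines should work. This assumes Ice.SysRptLst exposes the PurgeDate column described above; verify the column name in your version first:

```sql
-- Reports still tracked by Epicor whose purge date has already passed.
SELECT *
FROM Ice.SysRptLst
WHERE Archived = 0
  AND PurgeDate < GETDATE();
```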

Issue 2:

We also had another issue where after the Purge Ran, there were orphan report tables in the Report Database. We identified these tables by looking at the column in the Ice.SysRptLst table called FileName, which stores the GUID of the report table.

It looks like this: REPORT DATABASE: e565ffd7b80c4c71b0b9746a13172938

The e565ffd7b80c4c71b0b9746a13172938 is the dbo.Company_e565ffd7b80c4c71b0b9746a13172938 table.

I created a query to list all tables in sys.tables in the reporting database whose GUID does not exist in any FileName.

Those tables will never get purged, because Epicor doesn’t know they exist. I then issued DROP commands on those tables. When you get to this step, please make sure you create a backup, just in case.
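The poster’s exact query isn’t shown, but a sketch along these lines matches the description. The app database name ([E10]) and the Company_&lt;guid&gt; naming convention are assumptions taken from the examples above, so adjust them to your environment, take a full backup first, and only PRINT the generated statements until you have reviewed every one:

```sql
-- Run in the report database. Builds DROP statements for Company_<guid> tables
-- whose GUID no longer appears in Ice.SysRptLst.FileName in the app database.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id))
             + N'.' + QUOTENAME(t.name) + N';' + CHAR(13)
FROM sys.tables AS t
WHERE t.name LIKE 'Company[_]%'
  AND NOT EXISTS (SELECT 1
                  FROM [E10].Ice.SysRptLst AS r
                  WHERE r.FileName LIKE '%' + SUBSTRING(t.name, 9, 32) + '%');

PRINT @sql;                  -- review the output first
-- EXEC sp_executesql @sql;  -- uncomment only after reviewing and backing up
```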

@knash

Then part of our issue is the first one, as we do have records in that Ice table beyond the archive period. The System Agent “purge frequency” is the setting I’m referring to.

Take a look at your Event Log. Are there errors like the following?

That is an easier fix.

I’ll take a look, but we aren’t exporting SSRS reports to XML files, so probably not.

I think everyone on here is close to the truth on this…

Background:
Report generation consists of a few steps. The first is querying for all the data, doing the needed calculations, and building up the data for display, caching it in the ‘report database’ as the tables mentioned before. The design trade-offs are interesting, but that’s for a later post. The point is to stream as much as possible from the transactional DB into the report cache and not consume all the memory on the app server.
There has always been a wish-list item on the roadmap to take advantage of the cache for future report generations, but we’ve not had that bubble to the top of the backlog. It is useful as an archival/audit tool: you can set the archive time to 0, a day, a week, etc.

The Task Agent has a cleanup task to purge the records as needed. That algorithm is what was patched in 600.x forward, as mentioned above. The frustration has been not knowing why some tables are not being picked up by the algorithm. We continue to follow that concern and are trying to find out why they are being missed. Some alternatives to the algorithm are under investigation. We are still trying to find the smoking gun that causes the issue.

The workaround mentioned above for manually deleting the tables is fine. Once they should have been purged, they are ‘orphaned temp files’, just like those in your friendly neighborhood temp folder in Windows. A simple drop of the tables older than a week (or whatever the archive setting is) is ‘safe’.

If you have thousands from weeks or months ago, delete away.
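As a hedged sketch of that idea (not an official Epicor script): generate DROP statements for cache tables older than your archive window, review them, and execute only after taking a full backup of the report database. The seven-day cutoff is an example; match it to your own archive setting:

```sql
-- Run in the report database: build DROP statements for tables older than 7 days.
DECLARE @cutoff datetime = DATEADD(DAY, -7, GETDATE());
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id))
             + N'.' + QUOTENAME(name) + N';' + CHAR(13)
FROM sys.tables
WHERE create_date < @cutoff;

PRINT @sql;                  -- inspect before running
-- EXEC sp_executesql @sql;
```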

If anyone can find a smoking gun or has some suspicions, please forward them our way.